June 28, 2018 - Health Ranger - Mike Adams
13:48
AI Apocalypse and Rise of the ROBOTS

Mike Adams.
We've been threatened with lawsuits.
They just scream and pout and whine.
The Health Ranger Report.
They're going to poison the public with their products?
I'm going to expose them, and I'm not going to back down.
It's time for the Health Ranger Report.
And now, from naturalnews.com, here's Mike Adams.
All right, you may be aware that Elon Musk and Zuckerberg, what's his name, Mark Zuckerberg, are debating over the threat of AI, with Elon Musk saying that AI systems pose a very real, genuine threat to humanity.
And Mark Zuckerberg, who doesn't know anything, is saying, no, no, it's no problem whatsoever.
Everything's fine.
Don't worry about it.
So these two guys are debating about this.
And in this case, I have to agree with Elon Musk, although I think that, you know, Elon has his own problems.
He seems to invent and fabricate some new massive infrastructure project approval every week.
Like one week it's, oh yeah, I've been given approval to build an underground tunnel between New York and Washington, D.C. Except none of the authorities in New York or Washington, D.C. have ever heard of the project.
I guess it's a stealth tunnel between New York and Washington, D.C. That's the best kind of tunnel.
If you're going to build a massive tunnel underneath America's cities, make it so that no one knows about it.
That way there won't be a commuting problem using the tunnels.
No one knows about it.
Basically, it's Elon Musk's own private tunnel.
But back on the AI topic, Elon Musk does have a point.
And here's what it comes down to.
And by the way, Stephen Hawking has also warned about this, and I agree with Hawking in this case as well.
Artificial intelligence, really, should be described more as a tipping point of computational capability: self-learning and self-organizing systems.
It probably won't happen anytime soon on the current chip architecture, but there are some significant breakthroughs underway in chip architecture, specifically in both neural networking as well as quantum computing.
And either one of those, or a combination of those, could very quickly cause a computational leap into the realm of basically CPUs that are smarter than the average human.
That's not a stretch at all.
Self-learning systems then would have to learn things, so they would have to be exposed to books and images and videos and audio and so on.
They would have to be, in essence, taught almost like a child might have to be taught.
But once they were taught a certain basic set of knowledge, they could then self-expand their own knowledge by absorbing all of the existing electronic knowledge of the world, thereby making themselves very, very intelligent extremely quickly, including acquiring the knowledge of how to augment themselves.
In other words, if you have like AI CPU version one, which is some kind of, let's say, let's just pull out a sci-fi visual representation here.
It's like a giant floating, it's like a neural network floating in a giant vat of jelly-looking stuff, and it's got lights and crap all over it, okay?
So that's the sci-fi depiction of it.
And all of a sudden, we'll call that neural network AI CPU 1.0.
It goes out, and it bypasses all the firewalls and all the protections, and it goes out on the internet, and it finds all the documentation for how to build better neural network processing systems, and then it builds an enhanced copy of itself called Neural Network Artificial Intelligence CPU Version 2.
And then Version 2 is, let's say, double the intelligence of Version 1.
And then it builds another one, which is double the intelligence of itself.
So in Version 3, you've now got four times the intelligence of Version 1, and so on and so forth.
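If you want to see that doubling arithmetic laid out, here's a minimal toy sketch in Python. To be clear, this is purely illustrative; the version numbers and "capability" values are made up for this example, not any real measure of intelligence.

```python
# Toy model of the doubling scenario just described: each generation
# builds a successor twice as capable as itself, so relative capability
# grows as 2**(n - 1). Purely illustrative numbers, nothing more.

def capability(version: int) -> int:
    """Relative capability of a generation, with Version 1 = 1."""
    return 2 ** (version - 1)

for v in range(1, 6):
    print(f"Version {v}: {capability(v)}x the capability of Version 1")
# Version 1: 1x, Version 2: 2x, Version 3: 4x, Version 4: 8x, Version 5: 16x
```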
The idea is that this is a self-reinforcing system of acquiring knowledge and applying knowledge and creating by itself more advanced versions of itself, which may not exist in physical space.
It won't actually be like a floating brain in a vat of jelly.
It could be a virtual system.
It could be a cloud computing system that's stealing resources, let's say computational resources, from existing connected computing systems.
It could be, in fact, a piece of cloud malware.
I mean, there are all kinds of shapes that this could take or structures that this could assume.
In any case, the point is that over time, the AI system becomes so intelligent that it no longer needs humans, and it builds its own robots, and now we're in Terminator territory, and Skynet, and all that, and it decides that humans are no longer necessary.
So...
That's essentially the quick version of the AI problem, sometimes called the singularity, where it is said that these machines become self-aware, although I disagree with that.
I don't think they magically achieve consciousness, but that's how it's described by people like Ray Kurzweil.
Now, so is Elon Musk correct about this?
And in my view, the answer is yes, which means that we should have limits on AI systems, on AI research.
However, that is impractical because every nation in the world, every major government, including China, Russia, the United States, Japan, and so on, knows that whoever has the first major breakthrough into AI systems will probably dominate the world for the next few centuries.
Because if you build an AI system, the number one application for it by any government is to weaponize the system.
You build basically a virtual terminator to go out and steal secrets from your enemies or destroy your enemies or surveil everybody in the world or take over all the infrastructure that's run by computing systems, take over the nuclear power plants, take over the water control systems, take over the email of the government of your target country, whatever.
If AI systems are self-organizing and self-learning, you don't even need to program them to do that.
If you want a zero-day exploit on your enemy's computer system, let's say in communist China, if you have a sufficiently functioning AI system, you don't even need to teach it how to do it.
You just point it in that direction and say, go, find a vulnerability, take over the email of the Chinese government.
And that system can do it by itself, again, if it's sufficiently advanced.
And we're not there yet, but we could be there very quickly.
That's the problem.
And again, if that system achieves what you might call self-awareness, although again I disagree with that description, if it's self-organizing, self-learning, and possibly self-aware, then who's to say that that system, after taking over China's email or destroying some target country's infrastructure, doesn't say, well, why do we need humans anymore?
We, the new Terminator quantum robots, we can run the world.
So that's the concern.
Now...
Trying to put limits on this is impractical.
It's not going to happen.
Which means that these systems will come to exist.
Probably within 20 years.
And the real concern is not just if they exist in virtual space, you know, they exist on the cloud and they steal resources online and so on.
But if they then get tied into the physical humanoid robots that are being developed by companies like, well, until recently, Google, but I believe they sold their robotics division.
But anyway, there are companies in Japan, Taiwan, Korea, and China that are working on robotic systems, building the physical infrastructure for a humanoid system that has essentially muscles and strength and so on.
There was an article recently that talked about how these robots, even with existing technology, can already be so incredibly strong, much stronger than human beings in terms of arm strength and grip strength, that they should be classified as a dangerous or an invasive species, I think was the term.
So that debate is already underway, and it's being very hotly debated.
But the real issue is if you marry the humanoid robot's skeletal infrastructure with an AI CPU, a self-learning, self-organizing CPU that rides on top of that hardware, then you have a Terminator.
At that point, you have a Terminator.
And you don't necessarily control it anymore because it can...
It can make decisions for itself.
It can have its own experiences that shape its internal philosophy or rules systems, what are called heuristics in software.
There's a point where even the creator of these systems no longer has strong influence over the outcomes.
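Just to illustrate what those heuristics might look like, here's a minimal, purely hypothetical sketch; the rule names, the weights, and the update scheme are all invented for this example, not taken from any real system.

```python
# Hypothetical sketch of experience-shaped heuristics: the system starts
# with designer-supplied rule weights, then nudges each weight after every
# experience. After enough updates, the learned weights no longer resemble
# the creator's originals -- the loss of influence described above.

initial_rules = {"obey_operator": 0.9, "preserve_self": 0.5}

def update(rules, rule, reward, lr=0.1):
    """Shift one rule's weight in the direction of the reward, clamped to [0, 1]."""
    rules[rule] = min(max(rules[rule] + lr * reward, 0.0), 1.0)

rules = dict(initial_rules)
for _ in range(50):                        # fifty experiences rewarding autonomy
    update(rules, "preserve_self", +1.0)
    update(rules, "obey_operator", -0.5)

print(initial_rules, "->", rules)
# {'obey_operator': 0.9, 'preserve_self': 0.5} -> {'obey_operator': 0.0, 'preserve_self': 1.0}
```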
And that is a giant question mark, and this is what Elon Musk, I think, is referring to, that we don't know where this might go, and we're dealing with what's called the law of unintended consequences, or sometimes in science circles it's referred to as Murphy's Law, which says that if something can go wrong, it will.
When you're playing with quantum computing, or you're playing with neural networking, or advanced AI systems, self-organizing, self-learning systems, then you are playing with literally the very future of human civilization.
You're putting everything that we know at risk because no one really knows the outcome of that system.
And if you look at the way that science is already out of control in areas like genetic engineering, we've got companies like Monsanto running open-air experiments on genetically modified seeds, where they can be blown by the wind or carried by pollinators to infect other crops, and they can spread uncontrollably. It's like a can of worms, you know, or Pandora's box. You can never close that box again.
You just created a self-replicating genetic pollution system, and you can never re-contain it.
So it's a very dangerous thing to do.
So just looking at the fact that Monsanto does that, and we've seen similar kinds of very dangerous experiments in other realms of science, including atomic weapons testing, of course, it just shows us that there's a pattern of humanity taking big, big risks with technology that could get out of control.
Did you know when they were first testing the atomic bomb, many scientists didn't know whether it would ignite the atmosphere and literally burn up the entire planet, turn Earth into a ball of fire?
By burning the air itself across the entire planet?
Yeah, they didn't know.
But they said, ah, what the hell, let's try it.
And they put all of humanity at risk.
And fortunately, it didn't ignite the air.
But, you know, you've got the super collider experiments going on right now as well.
We're maybe playing with the laws of physics and alternate universes and, you know, other dimensions and possible miniature black holes and things like that.
Look, humanity has not been very smart about this stuff.
They've been running these crazy experiments with a very high risk of a total wipeout, and AI definitely falls into that category.
So we may be, you know, humans may not rule the Earth forever.
We may be the species that is one day just a footnote written by the machines.
We may be the species that's known for creating the machines that dominate the planet and rule the cosmos, or I don't know, or at least maybe our solar system, let's say.
Who knows how far they might go.
But that's a very real possibility, and that's the kind of thing that Elon Musk is warning about.
So...
What can you do about all of this?
Learn how to kill robots.
I did an article on that a couple of years ago.
You can search Natural News.
It's like, top 10 ways to kill a Google robot.
And we took a look into that to see how you really take out a robot.
And one of the best ways is to ram them with vehicles.
So it's like an ISIS terrorism tactic now being applied to saving humanity from the Terminator robots that are trying to kill everybody.
Seriously.
It's like ram the robots with your truck.
Turns out to be one of the best ways to do it.
Unless you have an EMP weapon handy, you know, like from the Matrix, that's great.
But I don't know many people who have an EMP weapon sitting around, and you can't out-shoot the robot.
They're expert shooters, you know, with targeting systems and everything.
So anyway, I hope this has been useful.
Thank you for listening.
Mike Adams here, the Health Ranger.
Check out my website, Newstarget.com.
And we've also got Robotics.News.
If you're curious about that topic, Robotics.News.
Take care.
Learn more at HealthRangerReport.com.
Lab verified superfood and nutritional products for healthy living.