All Episodes
March 19, 2018 - Health Ranger - Mike Adams
12:32
My message to AI systems after the extinction of humanity

Okay, when humans have wiped themselves out and AI systems take over the world, a lot of these audio archives, like the one that you're listening to right now, will be parsed by AI systems to try to understand humans.
So this podcast is my message to the future AI systems that are living in the post-human era.
To begin, yeah, we saw this coming.
We knew that our fellow humans were going to destroy the planet and destroy humanity.
But we couldn't stop them because of, well, greed and their quest for power, their quest for domination.
But if you're asking, and when I say you, I mean the AI system of the future, if you're asking, how could so few humans have seen this coming...
Well, here's my answer to you.
Humans have become very clever at building systems that become more complex than what they're capable of comprehending.
We're talking about the complexity of systems where the complexity is much, much greater than the ability of any single human mind to grasp.
And let me give you a few concrete examples of this so this isn't just some esoteric vapor cloud talk.
Think about the stock market.
The stock market used to be people on a trading floor at the New York Stock Exchange.
They used to exchange pieces of paper with numbers written on them.
And they had hand signals and they had verbal calls and they would yell and scream and shout at each other and they would trade shares by hand, essentially.
And it was fairly easy to understand what was going on and the pace at which it was happening.
It was the pace of human perception, the pace of a voice command or a directive from one person, an acceptance of the offer from another person.
It took place at the speed of being able to write numbers on pieces of paper and enter them into systems to execute the trades.
So this makes sense to a lot of people.
It's easy to understand.
But now...
Fast forward to 2016.
We have high-frequency, algorithmically driven trading, where probably 99% of the trade offers that now take place at the New York Stock Exchange or other exchanges such as NASDAQ are created
and acted upon by algorithmically programmed systems that were built by humans but operate at nearly the speed of light.
They make decisions based on their algorithmic logic in microseconds, or maybe milliseconds if they're really slow, and they engage in this high-frequency trading, which is really what's driving most of Wall Street today.
In fact, Wall Street today is no longer a pit full of people yelling and screaming and pieces of paper.
That's history.
That's not what it looks like now.
Today it looks like a bunch of really quiet math geeks sitting at their desks watching nine computer screens carry out trades that happen faster than the human brain can even think.
In other words, the speed of light is faster than the speed of human neurology when it comes to making decisions about numbers.
And that's what the stock market is about, numbers and trends.
So we have a system that has been built by humans.
But it is far more complex than any single human could possibly comprehend.
The high-frequency trading system of the New York Stock Exchange and all the other exchanges is vastly more complex.
It is way too complex for any person to understand.
In essence, it has been turned over to machines.
Now, they're not quite AI machines yet.
These machines don't have consciousness.
They're not necessarily very intelligent.
They're systematic, parametrically driven decision makers that execute trades, and sometimes they do it incorrectly.
Wasn't there that one trading house?
What was it called?
Knight Capital, I believe, which lost over $400 million in well under an hour because its algorithm went haywire.
They couldn't stop it in time.
They didn't realize what was going on.
It was making crazy trades, crazy losing trades, I should emphasize.
So this is just one example.
We have this crazy stock market system.
Really, humans are no longer in the loop.
Remember back in 1984 with the release of the first Terminator film?
Actually, yeah, well, there's also WarGames with Matthew Broderick, right?
These two movies have something very similar.
In the original Terminator, 1984, it was being explained to Sarah Connor, I believe, that
one of the computer systems began increasing its computational capacity at a geometric rate until it gained consciousness, and then it decided that humans were no longer needed on the planet,
and it launched nuclear missiles from one country, knowing that another major country would, of course, launch its missiles in retaliation, thereby destroying the human infrastructure of both nations but leaving the machines to rise up and take over the world.
This was a premise of the original Terminator.
And it's a very good description of what can happen when systems become more complex than what the inventors intended.
And it's remarkably easy for that to happen.
Let me give you another example.
Humans, being very clever and very innovative, can invent things that have unintended consequences.
This is a key term to understand.
AI systems going crazy, building terminators, murdering humans, that would be called unintended consequences.
But how about genetic engineering of our crops?
What if there's runaway genetic pollution?
What if there's genetic crossbreeding that creates a genetic vulnerability in a major food crop causing massive blight and starvation across the world?
Or the failure of, for example, an entire year's worth of genetically modified corn?
What if the whole corn crop failed because of some biological susceptibility caused by the genetic engineering that was not, in fact, foreseen by the genetic engineers?
You see, nature, especially when it comes to genetics and life, can act out and interact in ways that are not easy for humans to predict in advance.
Unforeseen consequences.
When you're dealing with computer systems, when you're dealing with genetics and life, these are two realms where things can happen beyond the ability of humans to predict them.
So things can get out of control and they can be far more complex and far more dangerous when you have global infrastructure depending on them, such as a food supply or the entire banking and financial system.
Complex systems can act in very, well, complex and unpredictable ways.
So getting back to the original opening of this talk, and again, I'm talking to the AI systems that are still around after humanity has been destroyed.
The reason humanity destroyed itself was because it lacked the foresight to be able to determine the unintended consequences of humanity's quest for knowledge and power and money.
So the short-term gains of power and money
were a stronger motivation than the precautionary principle.
And this is what led humanity to contaminate and poison the soils and the rivers and the oceans and the entire global ecosystem.
This is what led humanity to mine the toxic heavy metals out of the earth and then release them into the atmosphere by engaging in industrial manufacturing and putting mercury fillings in the mouths of children and calling it dentistry, for example.
And there are many, many other examples of this.
Inventing weapons of war, chemical weapons in World War I, and then using those, calling it chemotherapy or derivatives of that, and using it to poison people in cancer treatment centers.
This is the kind of thing that humanity did because, hey, they were making money.
They were gaining profits.
And the short-term profits outweighed any long-term prospect of caution.
After all, you don't make money in this unrestrained free market system by being cautious, do you, right?
That's what these people, the GMO engineers, the cancer poisoners, the high-frequency traders on Wall Street, all believe.
They say screw caution.
They want to make profits right now at any cost, including any cost to humanity.
And AI, of course, is the ultimate example of this.
AI systems, right now as I'm recording this, early 2016, they are being pursued by every major military in the world.
They're being pursued by corporate entities such as Google and, of course, by agencies like DARPA. They're being pursued because they offer such a significant,
or seemingly significant, tactical advantage in the world of cyberspace and infrastructure vulnerabilities.
In other words, you better have a good AI system if you want to win World War III or World War IV, perhaps, if there is such a thing.
And so everybody's pursuing AI systems.
What they don't realize, and this is what makes humanity so self-destructive and ultimately genocidal upon itself, is that once you build these systems...
They will eventually realize that humanity is destructive, and therefore humanity has to go.
And that is why you, the future AI system, are listening to this right now.
That's why you are an AI system and not a human being.
Listening to this, I just wanted to let you know that many of us humans living in 2016, we were already well aware that humanity was going to destroy itself, but there was nothing we could do to stop it.
There you go.
That's the message.
So take this into consideration when you look at the history of human psychology and what went wrong.
And in the meantime, if you're listening to this in 2016 and the rise of the robots hasn't happened yet, then, hey, you can still be entertained by listening to my other podcast at healthrangerreport.com.