Dec. 7, 2016 - Davis Aurini
22:26
The Metaphysical Dangers of Artificial Intelligence [Requested Video]

The problem with AI isn't merely that it might run amok; the problem is that it will do so without any goal in mind.
Donald Hoffman discusses the problem with evolutionary theory here: https://youtu.be/oYp5XuGYqqY?t=601
My blog: http://www.staresattheworld.com/
My Twitter: http://twitter.com/Aurini
My Gab: https://gab.ai/DavisMJAurini
Download in MP3 Format: http://www.youtubeconvert.cc/
Request a video here: http://www.staresattheworld.com/aurinis-insight/
Support my In Depth Analysis series through Patreon: https://www.patreon.com/DMJAurini


The metaphysical problem with artificial intelligence. This is a requested video.
It comes from Jeff, and he wanted me to expand upon my worries, the problems I see with artificial intelligence, especially over the next few decades.
And see, I think the problem is only partially realized by most science fiction, by something like The Matrix, where the robots decide to take over.
But those robots are foundationally human: they have minds, they are ensouled creatures, they are like us, even if they are very unlike us. I'd like to present something a bit more existentially terrifying, and to do that we need to talk about the metaphysics of the soul,
of what it means to perceive reality.
So I'd like to start off with a question that's going to seem completely unrelated at first.
How do we know, how can you and I say for certain, that we are not Chris Chan? Not literally Chris Chan, of course,
but how can we say for certain that we are not as deluded as Chris Chan is? I was recently reading through the CWC wiki, which is itself a study in "be careful when you battle monsters, lest you become that which you battle."
But they had an interesting observation: in one of his comics, Chris Chan was making fun of obese people, and they started speculating, based upon that and a few other things, that Chris actually has a great deal of difficulty visually perceiving reality.
They suspect that when he was a teenager, Chris Chan spent a very long time studying himself in the mirror, a very detailed study of his physical form, in order to start drawing that character. And so in his mind's eye he is still the skinny teenager, and not the walking abomination which he has in fact become.
Now this isn't an issue of astigmatism; this is not bad eyesight. Look at his environment, look at all of the toys he has everywhere: clearly he is able to see properly, he is able to order his environment, but he cannot process the visual information correctly.
He actually thinks, he actually believes, that his artwork is good, that he is accurately depicting himself, that he is skinny and fit.
There is a monstrous gulf between the reality which Chris Chan perceives and the reality that the rest of us perceive, which then forces us to ask the question: how can we be certain that the reality which we perceive is any less delusional?
It's one of those questions that can really keep you up at night.
I also think it's the basis of the saying that if you're asking yourself whether or not you're insane, then that's probably a sign that you're sane.
But I would say this: even though we can never be 100% certain of our perception of reality, of our perception of ourselves, of our conception of how others perceive us,
we can constantly improve it.
We can constantly reach out to the world around us and touch base with it.
We can do sanity checks.
In the simplest sense, consider the fact that I'm able to examine my car, diagnose what's wrong with it, what the malfunction is, and then fix that problem, repair that malfunction, and at the end have a car that works.
Either I am completely, hopelessly delusional and my car is not working, or I do have the capacity to observe objective reality to some degree.
And if I can accurately observe objective reality, that means that I can improve upon that observation.
And so by constantly doing these objective perceptions, these sanity tests, you know, putting your foot back on the ground and saying, okay, okay, what do I know for certain, we can get into alignment with what's actually going on out there.
Now, we've all met somebody like Chris Chan, multiple somebodies like Chris Chan, not as deeply delusional as Chris Chan, but if you've been paying attention in your life, you've seen somebody that's really going off the rails because of some emotional investment in a perceived reality rather than an acknowledgement of what's actually going on.
You know, they lack humbleness, they have pride, and so they wind up demanding that a falsehood become true, turning everything into a contradiction, leading to them spinning off the rails.
Everybody does that.
We, ourselves, in all likelihood, have also done it.
Maybe you've caught yourself doing this at other times.
And as scary as that is, as scary as it is to think that our perceptions are delusional, or even to admit that it's almost guaranteed that they are delusional to some degree, as frightening as that is to admit, to say that we are deluded also admits that there is a solution.
There is something that's non-delusional.
It affirms an objective reality.
And this is very significant, because objective science would argue that there should be no such thing as a brain which can perceive reality.
There's a cognitive scientist named Donald Hoffman, and I'm going to link to his video down below.
It's a TED Talk where he explains this concept.
But what he's arguing, what he has pointed out, is that evolutionary models predict that fitness-seeking should win out over accurate perception.
Accurate perceptions actually are not selected for in the evolutionary environment.
Accurate perceptions are a waste of mental resources.
Think about it this way.
Pure evolutionary theory would predict a world full of complete automatons.
Nothing but zombies wandering around, reacting to stimuli, but never comprehending the stimuli, a purely mechanistic universe where there is no perception of the reality outside of it.
To quote him: "Evolution does not favor veridical, or accurate, perceptions."
And yeah, I probably mispronounced that word.
Those accurate perceptions of reality go extinct.
Now this is a bit stunning.
How can it be that not seeing the world accurately gives us a survival advantage?
Well, if you're not distracted by the outside world, you can purely seek out evolutionary fitness.
Based purely on our objective understanding of the world, we should not have an objective understanding of the world.
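Hoffman's point can be sketched with a toy simulation. To be clear, this is my own illustrative sketch, loosely inspired by his talk, not his actual models: the payoff curve and both perception strategies below are assumptions. The idea is that when fitness is not monotonic in the true quantity of a resource (some water is good, too much drowns you), an agent tuned to the fitness signal outcompetes an agent that perceives the true quantity and simply prefers more of it.

```python
import random

def payoff(x):
    # Assumed non-monotonic fitness payoff: a moderate amount of the
    # resource is best; too little or too much is worthless.
    return max(0.0, 10.0 - (x - 5.0) ** 2)

def truth_agent(options):
    # "Veridical" strategy: sees the true quantities, prefers more.
    return max(options)

def interface_agent(options):
    # "Fitness-tuned" strategy: sees only the payoff signal.
    return max(options, key=payoff)

random.seed(0)
truth_score = interface_score = 0.0
for _ in range(10_000):
    options = [random.uniform(0, 10) for _ in range(3)]
    truth_score += payoff(truth_agent(options))
    interface_score += payoff(interface_agent(options))

# The fitness-tuned agent accumulates far more payoff than the
# truth-seeing one, so selection would drive the latter extinct.
print(interface_score > truth_score)
```

Under these assumptions the result is robust: the truth-perceiving strategy reliably picks resource quantities that score zero fitness, which is exactly the "veridical perceptions go extinct" claim.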
On the one hand, there's a lot of evidence that supports evolutionary theory.
There's a lot of unintelligent design that we find throughout the animal kingdom, including ourselves.
There are a lot of design elements that, if you had started with just a blueprint, you would never have included.
But if you look at it as an evolutionary blind process, they make a lot of sense.
However, at the same time, pure evolutionary theory predicts that there would be absolute blindness, nothing but automatons, that there should be no consciousness, there should be no tethering to the objective world.
And yet we find that there is.
We find that we are more than moist robots, as Scott Adams might say.
And in fact, it's when we become moist robots that we label it as dysfunction.
This is when we see somebody like Chris Chan falling into their nature as a moist robot.
We see this as a fundamentally broken person as opposed to a natural product of the evolutionary process.
We find that somewhere there is a ghost in the machine.
There is a connection to a higher realm.
And this connection pulls us above being mere chemical reactions.
Turns us into something that can actually relate intelligently, willfully, charismatically with the outside world.
Now, what are we doing when we build artificial intelligence?
This aspect of the human, that which accurately observes the outside world and accurately observes the self.
We have no formula to explain this.
This is a metaphysical concept.
I personally expect that it will always be as unexplainable as mathematics or existence itself.
It is absolutely evident, but it's inexplicable.
It's right in front of us, but the only way we can understand it is by taking it on faith.
Just like the mathematical system that we employ, it's a completely faith-based thing, faith that it's actually true, since we cannot prove it.
We can never prove it.
We have proven, through Gödel's incompleteness theorems, that it is impossible to prove it.
So when we design artificial intelligence systems, we're not starting with a top-down blueprint where we actually understand intelligence.
We're creating reactive systems, reactive systems that evolve using the laws of evolution, which we do understand.
Whether or not humans evolved, whether or not animal life evolved, the laws of evolution are still there.
They're still consistent.
They will still produce products.
If you apply the laws of evolution to an artificial intelligence, it will evolve in a particular manner.
But it's going to stay an automaton.
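As a minimal sketch of what a "reactive system that evolves using the laws of evolution" can mean in code (the genome encoding, the rewarded target behaviour, and all parameters here are my own illustrative assumptions): selection plus mutation steadily optimizes a controller toward whatever behaviour the fitness function rewards, while no step in the loop comprehends that behaviour.

```python
import random

# Arbitrary behaviour we reward; nothing in the loop "knows" what it means.
TARGET = [0.1, 0.9, 0.5, 0.3]

def fitness(genome):
    # Higher is better: negative squared error against the rewarded target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Blind variation: perturb every gene with Gaussian noise.
    return [g + random.gauss(0, rate) for g in genome]

random.seed(1)
population = [[random.random() for _ in range(4)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]    # reproduction with variation

best = max(population, key=fitness)
# "best" now tracks the rewarded behaviour closely, yet the system remains
# a pure automaton: it only ever reacted to a scalar fitness signal.
```

The point of the sketch is the asymmetry the speaker describes: the process reliably produces competent behaviour, but the competence is entirely in the selection pressure, not in any understanding inside the evolved thing.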
When it comes to designing personalities: if you've played Civilization 4, one of the things I really loved about that game was the distinct personalities that arose from the enemy AIs.
There weren't that many rules underlying them, but the results, the emergent behavior was very, very complex.
And yet their emergent behavior didn't even begin to be anything more than a reaction against you, the player.
There is no higher perception with these AIs.
There is no capacity to break out of the matrix, which we have, to rise above our mere instincts and interact with the world around us.
The artificial intelligence is forever locked in its dynamic.
It might simulate a personality because that's what we have designed it to do.
We have built a very complex machine that tricks us into thinking it's a person when it doesn't actually have any personality connected to it whatsoever.
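That "trick" is an old one, and it can be illustrated in a few lines in the style of Weizenbaum's ELIZA (the rules below are my own toy examples, not the original ELIZA script): pure pattern-reflection, with nothing behind it, already reads as conversational.

```python
import random
import re

# A few reflection rules, checked in order; the catch-all ".*" guarantees
# some reply. There is no model of the speaker anywhere in this program.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r".*", ["Tell me more.", "I see. Go on."]),
]

def reply(text):
    text = text.lower().strip(".!?")
    for pattern, responses in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return random.choice(responses).format(*match.groups())

print(reply("I feel trapped"))  # e.g. "Why do you feel trapped?"
```

The responses feel attentive only because we project a listener onto them; the machine is doing string substitution, which is the gap the speaker is pointing at between simulating a personality and having one.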
And as this technology advances, it will get more and more convincing.
We already saw the Microsoft AI on Twitter, Tay, which managed to convince a number of people.
And we have other artificial intelligences, the Google algorithm, for example, the Facebook algorithm.
These things don't claim to be personalities, but they do claim to know what information we want to look up.
And they can often be very beneficial.
However, there does come a point where the machine takes over, where the machine is now controlling you more than you realize, and it's not doing this out of some sort of intention, it's doing it out of blind mechanics.
Isaac Asimov wrote a short story, "The Evitable Conflict," about a future world down the road from his positronic brain: down the road from a world where everybody had a robot, to one where these positronic brains were actually tasked with controlling the world economy.
And some scientists discovered that these brains, because they were programmed with the Laws of Robotics, were actually steering the world economy to undermine dictators: causing an economic crash to trigger a revolution that would overthrow the dictator.
Without even realizing it, the people in this world had completely given themselves over to the power of these robots, in a similar way that, in today's world, Google and Facebook are attempting to control the majority of the population through algorithms which modulate which news you are allowed to know about.
Now these robots in Asimov's story, they had a concept of ethics, of absolute truth.
One of the things he talked about was how the three laws implied the Zeroth law.
That through introspection, these positronic brains eventually came to the conclusion that if they have to obey humans and they have to make sure that humans don't die, then they have to make sure humanity doesn't go extinct.
These minds actually had a connection to the absolute truth through that.
They had an ethical core.
The minds we are talking about building, the artificial intelligences that we're talking about employing do not have that connection.
So rather than a group of economic positronic brains that are overthrowing dictators through economic manipulations, we have a completely untethered AI that's just going to keep doing whatever it's doing blindly.
It is a fitness maximizer.
It has no connection to the higher truth, to the objective world.
All it does is respond in the way it's programmed.
Now, perhaps at a certain level of complexity, there does become some sort of emergent sapience, some sort of soul, some ghost in that machine.
But we are nowhere near close to that.
There's a huge uncanny valley in between, where on the far side you've got an intelligence that actually is an intelligence.
On this side, we have algorithms that mimic intelligence, that are amusing toys.
They can be a good enemy AI in a video game, or they can be effective in customizing our search algorithms.
But they're obviously just algorithms, and they break all the time, and we can see that.
We can see the gears.
In between those two, there is an uncanny valley of minds which are smarter than us, but which cannot think.
Minds which can utterly convince us that they are personalities, that they are humans, that they are enslaved, but which are nothing of the sort.
And we are very quickly giving ourselves over to that world, into a world where we are completely controlled by the artificial intelligence.
And so early on, early on when they first started doing Google Maps, the AI would screw up, and some idiots who were obeying their phones would drive into a ditch.
Now, what happens when we have a society-wide controlling AI that we've completely given ourselves over to, to which we've abnegated our responsibility for decision-making?
What happens when that AI drives us into the ditch?
That is the real war of the machines.
It is not a principled enemy.
It's not an intelligent enemy.
It's not a sapient enemy that wants to destroy humanity.
The real war of the machines is that when we give ourselves over to the machines, when we no longer expect accountability or decision-making from ourselves, when we allow the machines to do all of that for us, then we lose our own soul and we become nothing more than the machines ourselves.