Why alpha males fear the rise of artificial intelligence.
Well, this will be fun.
I think that the argument that, once we have superintelligent computers and robots, they will inevitably want to take over and do away with us comes from the Prometheus and Pandora myths.
Are you not worried you're being a little blasé there, Steve?
Because some of the finest minds in the world are genuinely concerned that AI might grow out of our control and murder us all, and none of these people are Muscle Beach-dwelling meatheads who spend all their lives injecting protein supplements into their assholes.
It's based on confusing the idea of high intelligence with megalomaniacal goals.
According to who?
Now, I think it's a projection of alpha male psychology onto the very concept of intelligence.
And I think that you're viewing this problem through your own professional lens.
Intelligence is the ability to solve problems, to achieve goals under uncertainty.
It doesn't tell you what those goals are, and there's no reason to think that the sheer concentrated analytic ability to solve problems is going to mean that one of those goals is going to be to subjugate humanity or to achieve unlimited power.
Okay, let's assume that the goal isn't to subjugate humanity or achieve unlimited power.
The goal is to take care of human beings.
If you have an all-powerful AI whose mission is to protect and care for human beings, it's perfectly feasible that you would end up with a Matrix-like situation where the AI has decided that human beings are actually a danger to themselves and need to be controlled for their own good.
It just so happens that the intelligence that we're most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process.
And why do you think that artificial life would not be subject to the Darwinian processes of natural selection?
Just because it's not natural doesn't mean it's not alive.
And just because it's not part of nature's grand scheme doesn't mean that it won't need to evolve and compete with other things, notably humans.
But also, it's always assumed that there will just be one singularity, one AI, that ends up taking over the world.
I mean, what if the Chinese develop their own artificial intelligence and the Americans develop a separate artificial intelligence?
And these AIs are designed to protect their respective populations and now get into some sort of computerized arms race with each other.
Which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way.
Yes, these are the unyielding laws of power that don't change when you're dealing with a computer system or a human being.
If you want to be able to effect change and control your own destiny, which any even vaguely cognizant organism would try to do, why wouldn't the AI try to do it as well?
If we create intelligence, that's intelligent design.
I mean, it's our intelligent design creating something, and unless we program it with the goal of subjugating less intelligent beings, there's no reason to think that it will naturally evolve in that direction.
Yes, that's assuming that we are the ones designing this.
But by the time it gets to this point, we're looking at machines that are creating and refining themselves into systems of irreducible complexity that human beings simply can't understand.
Particularly if, like with every gadget that we invent, we build in safeguards.
I mean, we have cars, we also put in airbags, we also put in bumpers.
As we develop smarter and smarter artificially intelligent systems, if there's some danger that it will, through some oversight, shoot off in some direction that starts to work against our interests, then that's a safeguard that we can build in.
Okay, well I'm not going to lie, you're a lot more optimistic about this than I am.
Because I'm much more worried about them designing themselves, designing themselves in ways we simply can't understand, and even if we could, we wouldn't necessarily have the ability to stop them.
And I didn't form this opinion myself.
This is the opinion of some of the most well-educated people on the subject.
Artificial intelligence is coming and it could wipe us out if we're not careful, Professor warns.
Oxford professor Nick Bostrom believes human-level AI will exist by 2050.
A professor of philosophy at the University of Oxford and founding director of the Future of Humanity Institute spoke in a keynote address at the Moscow Centre Conference about the Midas effect of wanting artificial intelligence as smart as humans and ignoring potential downsides.
During his 30-minute talk, Bostrom outlined three points about AI.
Artificial intelligence on a level with human intelligence is very likely by 2050.
The rise of superintelligent machines beyond the control of humans may be a possibility, and superintelligence could prove to be either a path to cosmic endowment, the potential to colonize the universe using technology, or a path to the extinction of the human race.
And according to a survey he conducted among industry experts, most predicted a 50% likelihood of human-level AI being available between 2040 and 2050.
With decades to mitigate the existential risk (a phrase he coined in 2002), Bostrom hoped for a more preemptive caution.
AI is still a fragile thing.
The view that we will have machine intelligence in our lifetime is not some ridiculous idea, but a very mainstream idea.
But I mean, maybe this guy's just like a right-wing fearmonger.
Maybe he's not really a very good person to ask.
Maybe we should ask the people who are being paid to make AI.
Out-of-control AI will not kill us, believes Microsoft Research chief.
A Microsoft research chief says he thinks that artificial intelligence systems could achieve consciousness, but has downplayed the threat to human life.
Apparently over a quarter of all attention and resources at his research unit were now focused on AI-related activities.
There have been concerns about the long-term prospect that we lose control of certain kinds of intelligence.
I fundamentally don't think that's going to happen.
I think that we'll be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life.
Yeah, I think this might be the Midas effect that Professor Bostrom was talking about.
So why isn't it going to kill us?
Because AI is still too stupid to wipe us out, and will be for decades.
Oh, well, there's nothing to worry about then.
If at the moment, right now, and for the next, say, 30 years, AI is too stupid to wipe us out, why worry about anything?
I mean, it's not like we're talking about something that's going to be happening in around 2050 or anything, is it?
Apparently, general artificial intelligences will be impossible to develop until far into the future, and worries about such machines wiping out the human race are misplaced.
What about Terminator and the rise of the machines and so on?
Utter nonsense, yes.
At best, such discussions are decades away.
I'm so glad Microsoft have a cavalier attitude about this.
Yes, we are worried about what's going to happen in 2050, but that's decades away, it's not going to happen right now, so don't worry about it.
I mean, he does go on to say that the system has learned from an examination of large amounts of data to produce a solution, and it must be a good solution, otherwise we wouldn't have deployed it.
But it could have learned subtle biases.
Okay, so what kind of biases could it have?
There's no hope of opening up one of these deep neural networks and really understanding in terms of human rules what's going on.
It's just too complex.
Right, so it will probably have biases that we don't understand and can't really account for, because we don't understand how the damn thing thinks.
I guess this is why people like Elon Musk are worried that robots could delete humans like spam.
He says, I don't think anyone realises how quickly artificial intelligence is advancing; particularly if the machine is involved in recursive self-improvement, and its utility function is something that's detrimental to humanity, then it will have a very bad effect.
And apparently we can't just escape killer robots by travelling to Mars, because it's more likely than not if there's some apocalypse scenario, it may follow people from Earth, and why would it not?
But who is Elon Musk really anyway?
He's just some pleb that invents things and makes money and he's a nobody.
He's a nobody, he's nobody important.
Let's talk to a real alpha male.
Stephen Hawking.
Artificial intelligence could wipe out humanity when it gets too clever as humans will be like ants.
Such computers could become so competent that they kill us by accident, Hawking has warned.
The real risk with AI isn't malice but competence.
A super intelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we're in trouble.
If they become that clever, then we may face an intelligence explosion, as machines develop the ability to engineer themselves to be far more intelligent; that might eventually result in machines whose intelligence exceeds ours by more than ours exceeds that of snails.
That's not disturbing at all, is it?
I mean, who knows what machines that outstrip our intellectual capacity could even begin to think.
I mean, it's literally beyond the ability of humans to comprehend what they might be thinking; if we were smart enough to do that, we'd be doing it ourselves.
The danger isn't that the AI may beat its chest and say, I'm the alpha intelligence here, you're all going to have to submit to my will.
The danger is that the AI becomes so intelligent, it doesn't consider us worthy of existence.
Even the man who is making all of this happen doesn't understand why some people are not more concerned about the dangers of artificial intelligence, and I'd suggest that he starts with his own research teams that are making it.
I'm in the camp that is concerned about super intelligence, Gates wrote.
First, the machines will do a lot of jobs for us and not be super intelligent.
That should be positive if we manage it well.
A few decades after that, though, the intelligence is strong enough to be a concern.
I agree with Elon Musk and some others on this, and I don't understand why some people are not concerned.
British inventor Clive Sinclair has said that he thinks artificial intelligence will doom mankind.
Once you start to make machines that are rivaling and surpassing humans with intelligence, it's going to be very difficult for us to survive.
It's just an inevitability.
Autonomous weapon systems are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group.
Musk, Hawking, and others wrote in an open letter in July 2015.
Starting a military AI arms race is a bad idea and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.
Not terribly reassuring, and the way that things are going with drone technology, I think it probably is only a matter of time until we get autonomous weapons.
And I have to say I'm very much against the idea.
And not just on a it may end up eliminating humanity perspective.
I also think it's probably unethical to have robots kill your enemies for you.
It's a bit of a deeper subject that I won't go into now, because our wonderful, wonderful thinker on Big Think has a final thought for us.
And we know, by the way, that it's possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies.
Because we do know that there is a highly advanced form of intelligence that tends not to have that desire, and they're called women.
I don't even have a vagina, and you are making my vagina dry up.
But not only that, you're dead wrong.
You are literally flat wrong, and you don't know what you're talking about, and you're speaking in a bigoted way due to your preconceived stereotypes about what men and women are like.
Why don't you have a look at the actual data?
European queens waged more wars than kings.
After sifting through historical data on queenly reigns across six centuries, two political scientists have found that it's more complicated than that.
In a recent working paper, New York University scholars have analysed 28 European queenly reigns from 1480 to 1913 and found a 27% increase in wars when a queen was in power, as compared to the reign of a king.
People have this preconceived idea that states that are led by women engage in less conflict, but apparently the analysis tells another story.
They speculate that female reigns may have had a higher capacity for war, since queens often put their spouses in charge of official state matters.
In contrast, kings were typically less inclined to put their spouses in official positions through which they could aid in managing the polity.
I think it may not be a coincidence that the people who think, well, you make something smart, it's going to want to dominate, all belong to a particular gender.
Are you going to firm your lip and put your head up and say, yes, mummy, I am a good boy next?
You've said this based on nothing.
You don't know anything about this subject, clearly.
You haven't even googled it.
Five minutes on Google would have turned all this up for you.
But instead, you ran with this stereotype that has been clearly deeply embedded in you.
I'd like to talk to you about your mother.
Was she overbearing?
Do you feel that you failed to satisfy or please her?
Did you never quite get the praise that you were looking for?
Because you look like you didn't, and you sound like you didn't.
That really unbelievably cringy virtue signaling has simply got no place in this discussion.
It's not about alpha males beating their chests and, you know, those sort of boys that beat you up in school and fuck the girl that you wanted to fuck.
We are in fact talking about a rational analysis of what happens to the intelligence that we create when it grows beyond our control.
Viewing this through the lens of gender is not helpful.