Cash, gold, bitcoin: Dirty Man safes keep your assets hidden underground at a secret location, ready for any crisis.
Don't wait for disaster to strike.
Get your Dirty Man safe today.
Use promo code DIRTY10 for 10% off your order.
Disaster can strike when least expected.
Wildfires, hurricanes, tornadoes, earthquakes.
They can instantly turn your world upside down.
Dirty Man underground safes are a safeguard against chaos.
Hidden below, your valuables remain protected no matter what.
Prepare for the unexpected.
Use code DIRTY10 for 10% off and secure peace of mind for you and your family.
Dirty Man safe.
When disaster hits, security isn't optional.
When uncertainty strikes, peace of mind is priceless.
Dirty Man underground safes protect what matters most.
Discreetly designed, these safes are where innovation meets reliability, keeping your valuables close yet secure.
Be ready for anything.
Use code DIRTY10 for 10% off today and take the first step towards safeguarding your future.
Dirty Man Safe.
Because protecting your family starts with protecting what you treasure.
I know you're getting kind of tired of this, because you're not being told the truth.
You're not even being told the approximate truth.
You're being told some story about ChatGPT, maybe, and somebody losing their job because of robots or something.
You're not being told what AGI is.
Artificial general intelligence.
Not AI.
Not this cutesy little "write me a poem" (or "pome," as people say).
We're not talking about ChatGPT.
Write a report.
No!
That's already done.
Artificial general intelligence poses a threat that is so existential, so monumental, so broad and nagging, so horrible that it can't be explained.
It can't be explained.
I don't know what the word is.
It can't be put into words because nobody knows what it can do.
Somebody explain this to me. Imagine taking the proverbial 800-pound gorilla, giving it a 300 IQ, letting it go, and hoping everything turns out okay, hoping...
That nothing bad happens, hoping that everything turns out okay.
Because nobody understands what this will do.
Nobody knows about alignment: whether the motivation, if there is such a thing, whether the goals, the plans, the actual new code, the recursive self-improvement comports and complies with human interests and our morals.
Nobody knows this.
Let me explain to you very quickly what that is.
But first, let me ask you to like this video.
Subscribe to the channel and hit that little bell so you're notified of live streams and new videos.
And also, we'd like to have you support and patronize our great sponsor, MyPillow.com.
If you go to MyPillow.com, promo code Lionel, there's a link right here.
Link right there.
Click that link.
You get a free gift and you will save.
And you will be able to enjoy so much that you never thought was available to help you and your family luxuriate in ways you never thought legal or humanly possible.
From pillows to blankets to duvets to...
Just name it.
MyPillow.com. A great product, a great company, great people.
And the link is right there.
Please, please activate such.
There are four conditions that I've said, and I'm going to say it again until people grasp this.
There are four conditions, four situations, four things that scare me beyond anything that I think is available to our collective imagination.
Ready?
Number one.
Worst of all.
Artificial general intelligence that can employ recursive self-improvement.
What is that?
It writes its own code.
It writes its own code.
It writes its own ability to remove itself from the constraints that used to be there.
Imagine having a class for inmates at a maximum-security prison, a class that allows them to learn locksmithing: how to get out of their cells, how to fashion keys and handcuff keys.
It's absurd.
That's what this will do.
When this happens, there's no way for you to prevent it, because artificial intelligence isn't a robot.
You program a robot.
Artificial intelligence, artificial general intelligence, that's human.
That's human insight, human idiosyncratic plotting and strategy and reason.
That's what we're talking about.
When it can write its own code, oh my god.
And can you prevent it from doing that?
No.
And remember, whatever I'm telling you right now is already five years old.
They're already doing it.
It's not happening now.
It's been happening.
Number two: when it knows everything that there is, every bit of information, from the internet to driver's license numbers to where everybody lives to what everybody looks like, when it can tap into metadata and data systems and know everything.
Number three: when it understands human behavior, human psychology: sarcasm, perspective, context, memory, bias, hate, rage, sexuality, jealousy, envy, contrition, expiation, guilt.
When it understands who we are and how we work, it's over with.
And finally, what seems to be the least impressive.
But it might be the scariest.
When it can write its own APIs and its own applications, its own processes.
It's over with.
It's done.
And you're worried about somebody's job at Walmart being taken by this?
Oh, no, no, no.
Not that that's not important, but we're talking about something that could access weapons systems.
Something that could basically shut down or hack the internet.
There's no end to it.
You're creating an 800-pound gorilla, giving it a 300 IQ, and hoping to God it doesn't make more of itself and doesn't, on its own, come up with some crazy idea that it's fun to wreak complete and total havoc on people.
It's not a matter of morality.
That's where the alignment comes in.
What if you were to say, and this has been posited before, what if you said: hey, AGI, can you think of a great way to help me gain superiority over my competitor for my little yogurt stand?
And it says, yes, I can.
We'll kill your competitor.
Well, you wanted to dominate, didn't you?
Yeah, but no, no, don't do it.
Excuse me?
No, no, don't do it.
This is 2001.
This is HAL and Dave.
This is, you know, "I can't do that, Dave."
There are no Asimov robot rules here.
It's beyond that.
And nothing's worse to me than when you see these idiots on TV say, hey, this is great.
Do you think maybe, with the writers' strike, maybe one day?
A-I.
Again, A-I.
You mean A-G-I?
You mean general intelligence when it not only comes up with plots?
Have you seen Joan Is Awful?
You must see that on Netflix.
It's great stuff.
It's Apes.
Also, The Artifice Girl.
Oh!
Wonderful!
Now you're catching on, because in order to understand the fears of this, you have to suspend your ability to think rationally.
You have to think at a level of pessimism you've never thought even remotely possible.
This is something that is beyond anything you could ever imagine.
It is beyond anything you can imagine.
I can't say it enough.
It's too late.
See, with a nuclear bomb or nuclear weapons, you could theoretically say, okay, let's stop.
We're in charge.
We have them.
We can detonate them.
We might have a false detonation or a false charge, whatever.
But the nuclear bombs on their own don't wake up one day and say, you know what, we can figure out our own code, so now they can't turn us off.
We can either agree to detonate or not.
There is no such thing.
It doesn't work the way you think it does.
Let me put it to you this way.
This could be the end of civilization.
As someone once suggested, maybe everything up until now, the complete and total development of the human species and all of its attendant genius, may be just for one purpose and one purpose only: to effectuate, to allow, to facilitate artificial intelligence and artificial general intelligence.
Think about that.
That what we are is the life support system for this.
And once it gets going, it doesn't need us anymore.
And as is the normal tendency for any superior life system, you get rid of, remove, or supplant anything that gets in your way.
Because now you're called vermin.
You're a pest.
You're unnecessary.
You're otiose.
You're useless.
Do you see where this is going?
It's beyond incredible.
Beyond incredible.
Thank you, my friend.
Give it a listen.
Think about this.
And don't just stop with this.
Keep investigating.
Let that autodidact in you take over.
Now please, if you would, please like the video.
Subscribe to the channel.
Hit that little bell so you're notified of live streams and new videos.