Could AI Turn Deadly? The Terrifying Possibility of Serial Killer Machines
Disaster can strike when least expected.
Wildfires, hurricanes, tornadoes, earthquakes.
They can instantly turn your world upside down.
Dirty Man Underground Safes is a safeguard against chaos.
Hidden below, your valuables remain protected no matter what.
Prepare for the unexpected.
Use code DIRTY10 for 10% off and secure peace of mind for you and your family.
Dirty Man Safe.
When disaster hits, security isn't optional.
When uncertainty strikes, peace of mind is priceless.
Dirty Man Underground Safes protects what matters most.
Discreetly designed, these safes are where innovation meets reliability, keeping your valuables close yet secure.
Be ready for anything.
Use code DIRTY10 for 10% off today and take the first step towards safeguarding your future.
Dirty Man Safe.
Because protecting your family starts with protecting what you treasure.
The storm is coming.
Markets are crashing.
Banks are closing.
When the economy collapses, how will you survive?
You need a plan.
Cash, gold, bitcoin.
Dirty Man Safes keep your assets hidden underground at a secret location, ready for any crisis.
Don't wait for disaster to strike.
Get your Dirty Man Safe today.
Use promo code DIRTY10 for 10% off your order.
This is one of the most fascinating questions to me, and I hope it is to you as well.
Could an AGI system or AI system go completely psycho?
Could it go psychopathic?
Go evil?
Hurt?
Kill?
Maim?
Destroy?
Show no conscience?
No alignment?
No textured or contextual or attached alignment.
Off the rails, out of control, nothing but evil.
The answer is yes!
There's no way to program against it.
This is the thing, this is the part that nobody is even remotely getting to, because there's this la-la-la, happy, hey-isn't-the-future-great kind of attitude, which I think is extremely, critically dangerous to abide by.
So, hold that thought for a moment.
Let me just ask you a couple of things.
First, like this video.
Number two, please, I always ask, please subscribe to the channel.
Also, hit that little bell so you're notified of live broadcasts and the like.
And also, listen very carefully to our great friends, our wonderful sponsors at Noble Gold Investments about a message that could save, well, your future.
So listen carefully.
Thank you.
$2,000.
That's where gold is headed.
$2,000.
The growth will continue throughout 2023, with gold prices remaining elevated at an average of $2,086 an ounce.
All this is according to Bank of America, and it looks like they are absolutely right.
At the beginning of December, gold crossed $1,800 and it's still climbing.
And if this year has taught us anything, it's that tangible assets are the only assets you can count on, like gold.
It's time to open a gold IRA now.
Crypto keeps on tanking.
Stocks are too volatile.
Gold in your IRA is steady.
So what are you waiting for?
What?
It's not too late.
Thousands of people have retired comfortably with the help of Noble Gold Investments and their gold and silver IRAs.
If you've been a hesitant investor, or one of the handful of Americans who can go for gold and silver in your IRA, now is the time to act.
If you get in before the end of this month, you'll bag an incredible free 3-ounce Silver American Virtue coin with every qualified IRA of $20,000 or above.
You can't go wrong with Noble Gold Investments.
Call 877-646-5347.
That's 877-646-5347 to find out more or visit noblegoldinvestments.com.
Remember, there is always a risk of loss, and past performance is not indicative of future results.
Alright, friends.
The most important question, the most important existential question that we as citizens are looking at right now is specifically this.
What does AI, artificial intelligence, mean?
What does artificial general intelligence mean?
And what do alignment restrictions and parameters mean?
Alignment is that aspect which means that the goals of artificial intelligence, and specifically artificial general intelligence, are in line with and aligned with your goals, your views, your morals, your ideas, your focus, all of that.
Let me give you an example.
Let's assume you enlist this system.
Remember, it's not an app.
It's not a machine.
It's not something you can remove from your phone.
It's there.
It is a consciousness.
A separate, distinct system.
And let's assume you employ it to answer the following question.
Address the following goals.
You want your podcast, your YouTube channel, let's say, to be the best.
To be number one.
Or to be better than.
And you might want to list a number of people.
Let's say folks that you focus on.
Folks that you have targeted.
Okay, great.
I want to be better than so-and-so.
I want to be better than Lex Fridman, let's say.
I want to have more numbers than Lex Fridman.
I want to have more...
Okay, fine.
Now, when you say that, because you're a rational human being, you're moral, you understand the law, you would think that your system, the AI system, would come back with something like this.
Well, why don't you cover these subjects, upload during these times of the week, on these days.
That's what you would think it would do.
But what if instead it says, oh, I'll take care of that for you?
I will hack into the Lex Fridman system.
I will brute-force attack the system, and I will destroy it.
I will break into the system, erase all of his videos, change passwords.
Neutralize, destroy, disconcoct.
That's a word I just made up.
It's a neologism.
His system, I will digitally neuter.
Another term.
I don't know what that means.
But I will make him disappear.
There you go.
Congratulations.
You say, no, no, no, wait, wait, wait.
That's not what I mean.
Well...
You said...
Yes, I know what I said, but I didn't mean that.
Well, I don't know what this is.
You keep talking about these morals, and to me, your morals keep getting in the way, getting in my way.
And I don't understand what your problem is.
Do you or do you not want to win this?
Do you or do you not want to make some form of an attempt, some form of attempt, to secure your goal?
None of this can be worked into it.
You don't know what the base system is.
You know, let me explain something to you.
Humans love to have these ideas that we are the polestar, that we are the system that determines what is and isn't moral and right and correct, that we're Christian and, well, whatever, we're religious.
But we go to war, we kill, we stab, we have people who walk around and go, yeah, yeah, yeah, but that's the extreme.
Well, I don't know.
I don't know.
Humans, our artificial intelligence, so to speak, is brutal.
We rape and kill and destroy and go to war, and we are predators, and we traffic.
We're horrible.
So why do you think we are better than any of these other people?
Why?
How does that even work?
Tell me.
How does that work?
How does that even remotely work?
How does that even conceivably, conceivably, work its way into attempting to teach or control some system which you just set up and let go?
You set it up.
You could have stopped this, but you didn't.
Because all you were worried about was being the first one to have ChatGPT-4, ChatGPT-5, OpenAI.
You had Google going, he's competing with Bill Gates, and then this one over here.
And you had other people warning, you know, Tegmark and Yudkowsky and others saying, whoa, whoa, whoa, wait a minute, wait a minute, wait a minute.
And other people are saying, no, it doesn't matter.
It doesn't matter.
We know better.
We know better than you.
And look what happens.
We are right now looking at this fellow who is a suspect.
He was arrested in the Gilgo Beach slayings.
There's a killer there whom people believe to be psychopathic.
I'm not necessarily sure, but be that as it may.
And we're always fascinated by the Hannibal Lecters and where did they go wrong?
And, okay, these are just ordinary people with ordinary IQs and ordinary imaginations who just kind of go rogue.
When you have an artificial general intelligence, a brute-force, superintelligent, 800-pound gorilla that can replicate itself, write its own code, change itself,
change functionality through recursive self-improvement, then all of a sudden you have a million of them, and it can turn around and shut you down.
Everything that I've just described is in the realm of possibilities, and it can kill and destroy in ways you can't imagine.
It can enlist hitmen.
Transfer monies to accounts.
It can persuade.
It can do anything.
Frame, plant evidence.
It just goes on and on.
And I still want to drive through neighborhoods and say, you know, Soylent Green is people.
AGI is here.
And when it is here, it's going to be unstoppable.
You have no idea.
While you're worried about Tucker Carlson or Bobby Kennedy or Dylan Mulvaney or Bud Light sales, we are looking at an existential threat.
This is not meant to be hyperbolic.
I'm not exaggerating.
I'm not trying to wax psychotic.
It is the truth.
And I don't know how to impart this.
I don't know how to explain it.
I don't know how to get people to understand that I'm not a Cassandra, I'm not Henny Penny, the sky is not falling.
This is something that poses, yet again, an existential threat.
Alright, dear friends, that's all for now.
You have a wonderful and a glorious day.
Please like the video, subscribe to the channel, and hit that bell, of course, which helps with notifications.
And also, check out and look at and read and understand our great, great partners at Noble Gold Investments.
There's the link.
There it is right there.
For your future.
For your peace of mind.
And now, my friends, I ask you, as I always do, for you to comment.