Nov. 26, 2023 - Real Coffee - Scott Adams
01:29:12
Episode 2304 Scott Adams: CWSA 11/26/23 Intelligence Is An Illusion, AI Proves It. So Does The News

My new book Reframe Your Brain, available now on Amazon https://tinyurl.com/3bwr9fm8 Find my "extra" content on Locals: https://ScottAdams.Locals.com Content: Politics, DeSantis Campaign, NYC Thomas Jefferson Statue, Business Insider, President Trump, Israel Thai Workers, Border Migrant Pronouns, Ukraine War Origin, Newsweek Propaganda, X Media Matters Lawsuit, Elon Musk, Dunning-Kruger Effect, Intelligence Illusion, Zombie Scientific Studies, Relationship Advisors, ChatGPT Believes Hoaxes, AI Alternative Theory Censorship, SuperPrompts, Brian Roemmele, Nivi, Scott Adams ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure. --- Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support


Do-do-do-do.
Do-do-do-do-do-do-do.
Good morning, everybody, and welcome to the highlight of human civilization, and possibly even the civilizations that came before. They built the pyramids.
Whoever they were.
If you'd like to take this experience up to levels that even Elon Musk can't reach with his best rocket, all you need is a cup or a mug or a glass, a tank or a chalice or a stein, a canteen jug or a flask, a vessel of any kind.
Fill it with your favorite liquid.
I like coffee.
And join me now for the unparalleled pleasure of the dopamine hit of the day, the thing that makes everything better.
A little bit of serotonin today.
It's called the Simultaneous Sip, and it happens now.
Go!
Oh, that's good.
Savor it.
Savor it.
All right, good.
I was worried that some of you were gonna just gulp and not savor, and that would just... What a way to start the day, huh?
All right, let's talk about all the news.
I do have a theme which will emerge pretty soon, and the theme is...
Intelligence is an illusion.
That's right.
Intelligence is an illusion.
We'll see if we can find that theme as I go through the stories today.
I think you'll find it.
My favorite story of the day goes like this.
I'm no political consultant, but if I were, I would say to you: if you're running a campaign, and the biggest topic about your campaign is the question of whether Roger Stone called your spouse a four-letter word that starts with C and ends with T, and that's the only news you're making, you've got a problem.
That's Ron DeSantis' situation.
The only story out of Ron DeSantis' campaign this week is that Roger Stone may have called his wife a C-word.
He denies it.
He says he only said, see you next Tuesday.
So I guess we can believe him and maybe he just plans to see her next Tuesday.
If you know what I mean.
All right.
Well, that's pretty funny.
So if I can teach you one thing, it would be: that's the time to re-evaluate your campaign.
If he's looking for a sign, can you imagine DeSantis praying for guidance?
God, I'm trying to decide whether I should stay in the presidential race or should I get out?
Can you send me a sign?
Let's see what's happening on X today.
Roger Stone is saying some things about my wife... Okay, it's time to quit.
It's time to quit.
I'm out.
I'm out.
So, that would be the religious interpretation.
All right.
In the category of President Trump is right again.
I was watching the Amuse account.
That's the name of an account on X. Amuse.
And Trump has told Democrats that if you keep taking down statues, pretty soon they're going to come for Thomas Jefferson.
Do you remember when you thought, they're not going to come for Thomas Jefferson?
Well, the Thomas Jefferson statue has been removed from City Hall in New York City.
So it's gone.
Now, I'm perfectly in favor of removing it, but I think there's going to be a big sort of gap where there should be some artwork.
Is anybody with me?
Should it be George Floyd that they put back?
I mean, that makes sense, right?
Because as we've learned, Thomas Jefferson was a horrible, racist piece of shit.
But not George Floyd.
George Floyd would be more of a hero situation.
So, everybody?
George Floyd statue?
It's unanimous.
That's what we'd like to see.
Meanwhile, Business Insider is running an article, the title of which is, here's what happens if Trump dies while running for office.
And then they repeat, dies while running for office, over and over again in the article.
Do you know what that looks like?
Well, I don't know, but it looks like a murder attempt.
To me.
Because if this reporting, and this is just speculative, were influenced by any intelligence entities within the United States, it would be for the obvious purpose of triggering an actual assassination.
So this is the copycat problem, except without the copying.
So the copycat problem with serial killers is that I don't believe anybody would even have the idea of shooting up a school except that it's in the news.
Like who would even have that idea?
Right?
So these are clearly things that the news prompts.
So knowing that the news can prime you and prompt you into doing something you wouldn't have even thought of doing, what happens if you start running a whole bunch of stories about what would happen if somebody took a shot at Trump?
It's guaranteed.
If you tell that often enough to a large enough population, it's not a maybe.
Does everybody get that?
If the population is large enough and you repeat that message enough, even though you're not encouraging somebody to do it directly, just putting the idea into that many people's heads largely guarantees it happens.
Or at least somebody tries or thinks about it really hard.
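The "it's guaranteed" claim here is just the arithmetic of independent trials. A hedged sketch of that math follows; the per-person probability and audience size are invented numbers for illustration, not measured values:

```python
# Illustrative sketch only: the per-person probability and audience size
# below are made-up assumptions, not measured figures.
def p_at_least_one(p_per_person, population):
    """Probability that at least one person acts, if each audience member
    acts independently with probability p_per_person."""
    return 1 - (1 - p_per_person) ** population

# Even a one-in-ten-million chance per viewer becomes near-certainty
# across an audience of 50 million, but stays negligible for a small one.
print(p_at_least_one(1e-7, 50_000_000))   # ~0.993
print(p_at_least_one(1e-7, 1_000))        # ~0.0001
```

The point of the sketch is only that a fixed, tiny per-person probability compounds with audience size; the actual probability for any real story is unknowable.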
So that's pretty close to a murder attempt.
Now if this was just an idea that the reporter or the journalist had, well then it's not.
Unless the journalist was intending to make it a murder attempt, but there's no way to prove that.
To me it looks like not an organic story.
No way to know, but to me it looks like a story that somebody in the blob planted. You know, we call "the blob" whoever's running the country from the intelligence, media, and Democrat side. To me it doesn't look organic, but it's hard to know.
Just a guess.
Well, there's this hostage release situation happening over in Israel and Gaza, and there's reason to believe, we're told by Jake Sullivan, the National Security Advisor, that at least one of the three Americans might be among those released.
I kind of hate that Hamas has us playing this game.
Where, you know, you have to think about and worry and speculate about which hostages get released.
I feel to some extent that's sort of playing into their terrorism model.
That it makes you think about it and get terrorized again.
Because if you're hoping your loved ones get released, but then they're not released, that would be like terrorism all over again.
It's like they're milking this frickin' thing as hard as they can.
Now, I'm not saying there's an alternative.
Honestly, if I were in charge of everything that Israel is doing so far, I would do it exactly the same.
I have not personally, you know, I'm no military expert, no expert in the area, but when I watched from the beginning to the end, from October 7th on, I didn't see anything I would disagree with in the way that they would handle it.
And what I mean by that is it's the way any country would handle it if they had the resources.
So the reason that I don't criticize them is that literally anybody would do the same thing they're doing.
How do you criticize that?
You can make some kind of hypothetical, philosophical, morality-based argument.
But the truth is, 100% of nations which exist today, if they had the resources, and they were in that situation, would respond the same way.
So, criticizing it just seems like an absurdity to me.
It's more like cause and effect, and just watching it, basically.
So anyway, that's moving forward.
CNN reports, and I'm going to give CNN a little credit here.
When there are stories, international stories, that don't have a specific political component to them, they seem pretty good.
There's a lot of stuff that CNN gets right, just not if it involves Republicans.
As long as there's no Republican in the story whatsoever, they actually do a decent job of collecting news.
So one of the stories here, and I didn't know this, so this is brand-new news to me, is that a lot of the workers on the farms in Israel were, of course, Palestinians.
But there weren't nearly as many as there used to be because of past history.
Apparently most of the workers, or a lot of them, a ton of them, were Thai.
So they're from poorer areas in Thailand.
And apparently 30,000 to 40,000 workers are now, quote, missing.
They're missing, meaning that they didn't go to work.
But apparently the Thais were massacred on October 7th.
So Hamas didn't care what your nationality was, they were just killing people.
So they killed a bunch of Thai workers.
And the Thai workers said, no thank you.
I'm sure there's another world, there's someplace else we could work.
And so they left.
So they're not missing missing, they probably just went home.
And so it looks like the crops, there's not enough workers to pick the crops and, you know, it takes some skill to milk a cow.
They said you had to be highly skilled to milk a cow.
Has anybody ever milked a cow?
I mean, I realize that they're using milking machines, but may I teach you everything you need to know about milking a cow?
So you bring the cow in, you put it in its little stall so its head is immobilized.
They like to be milked, apparently, because they're just used to it.
Then you take one of the teats.
Yes, that's what they're called, the teats.
One of the four.
And you have to, I'm not going to do the impression of it on livestream because it'll turn into a meme, but imagine somebody shaking a banana.
You do that a little bit, and it's called priming.
So you have to, you have to prime each of the teats by hand and make sure that it's producing.
And then you take the little suction thing that will be the automatic milker and you replace your hand with the milker and it milks that cow.
It's not really that hard.
I mean, I'm pretty sure I learned it completely when I was eight years old.
Took about five minutes of training.
So I don't know what the skill exactly is.
I guess maybe maintaining the milking machines or something.
There's probably some skill in that.
But anyway, there's going to be way too much milk in the cows and way too much food in the fields.
But separately, we heard stories, Joel Pollack was reporting on this, that a lot of the citizens, Israeli citizens, were chipping in, trying to get the harvest picked, but there won't be enough of them, I don't think.
All right, in another story, Charlie Kirk is talking about how there's a new memo for the border, the American border.
Joe Biden's DHS border patrol agents have now been trained that they must use gender-neutral language with the immigrants coming in.
So they can no longer just assume he, him, she, her, Mr. and Mrs. until a certain migrant goes by one of those pronouns.
So you can't call the migrants by a regular pronoun until you're really sure what's going on.
It's called the Guide to Facilitating Effective Communications with Individuals who Identify as LGBTQI+.
I?
What's the I?
LGBTQI?
No, not idiot.
Intersex?
Irrational?
No, it's not irrational.
Stop being that way.
Insane?
No.
Stop it.
Stop it.
Incels?
No.
Well, it could be.
Is it incels?
Incest?
No, but that's a good guess.
Illegal?
No.
Stop it.
Indigenous?
No.
These are terrible guesses.
You're terrible.
Icky?
No.
You're horrible people.
Indifferent?
Invader?
My God, you're horrible people.
You're the worst people I've ever seen in my life.
I've never seen such worse people in one place.
Iranian?
No.
Ignorant?
No.
Irrelevant?
No.
No.
Imperialist?
No.
Itchy?
No.
Impotent?
No.
These are terrible guesses.
No, not imaginary.
Not igloo.
No!
No, you idiots!
It's not idiots either.
Insect, that's terrible!
You're all awful!
You're monsters!
You're monsters!
I'm disgusted by you all.
Iguana is the correct answer.
It's iguana.
I think it's intersex.
What is intersex?
It's not inbred.
It's not inbred.
It's not inflatable.
It's not ignoramus.
No.
It's not impotent.
Stop guessing.
It's not impotent.
It's not iffy and it's not illusion.
It's not infected.
It's not infected.
It's not.
You're monsters.
Every one of you.
You're just assholes.
You're all terrible.
Inoculated.
Inoculated.
Stop it.
It's not irrelevant.
It's not intolerable.
Oh, you're all assholes.
Every one of you is an asshole.
And it's not funny.
If there's one thing I can tell you, it's not funny.
Now, this is where I have to stop and explain humor to anybody who's humorless.
Is there anybody here humorless?
It's funny because it's inappropriate.
Impenetrable.
Insecure.
Stop it!
Will you just stop it?
These are too funny for me not to read them, but it makes me sound like a terrible person.
I'm not terrible.
They really shouldn't put I in the end of that without defining it.
I didn't realize there were so many insulting words that start with I. Oh God, that was unexpected.
Alright, so that's what's going on at the border.
I didn't see if there was any news about massive migrant caravans, but we covered the pronoun situation pretty thoroughly.
Oh my god.
Anyway, speaking of ridiculous things, Glenn Greenwald continues to report and post on X about what he calls a continuing, now a mountain's worth of, evidence that Russia and Ukraine were close to a deal at the start of the war to end it in exchange for Ukraine's neutrality, not entering NATO. But allegedly and reportedly, the story that's developing suggests that Biden and Boris Johnson blocked it, insisting that Zelensky go to war and win.
Do you think history will decide that's what happened?
Is that going to be how history covers this?
I feel like maybe even if it's true, history would never explain it that way.
I feel like history would go big-picture and say, well, there were these long-standing issues, and so there was a disagreement.
I don't think it would ever get down to, well, there's this guy named Joe Biden.
He was known not to be mentally competent at the time, and Boris Johnson was his trained monkey who would do whatever Biden wanted.
So Joe Biden's defective brain decided to start a world war for no particularly good reason or no gain whatsoever.
And Boris Johnson, who has a bird's nest for a haircut, decided that he'd go along with that, and blah, blah, blah, World War III.
I feel like that's how history is going to cover that.
But if this is true, that Zelensky was sort of talked into it, then at the very least that debunks the concept that Zelensky was blackmailing Biden. Because didn't you think that the real story here is, why is this happening at all?
I mean, the only explanation is that Biden's being blackmailed by Zelensky.
But it could be the opposite.
It could actually be that Zelensky is being blackmailed, effectively, or bought off by Biden.
The blackmail might have been in the other direction.
Now when I say blackmail, I mean, Biden could have threatened directly or indirectly to remove Zelensky from office because you figure we could figure out some way to do that.
And otherwise we could make him rich beyond his wildest dreams if he survives the war.
I think Biden bribed Zelensky.
Or maybe it was both.
Maybe they bribed each other and it was a tie.
I'm going to bribe you to give me money to fight this war.
You can't bribe me.
I bribe you.
I want you to fight this war.
Here's some money.
No, you can't give me money.
I'm bribing you.
No, I'm bribing you.
No, I'm bribing you.
So I think that's how history will cover it.
Who's bribing who?
No, I'm bribing you.
So let me ask you this.
How do you think AI will cover that history?
Is AI going to say, well, according to Glenn Greenwald, what we had here is Biden wanting a war at any cost and forcing Zelensky into it.
Will it say that?
Or will it say there were longstanding problems, you know, NATO expansion, and Putin wanted to have a defensive zone around Russia?
Geopolitical situation.
Is it going to look like that?
And people have various claims about who's right and who's wrong.
But I'm an AI so I can't tell you who's right and who's wrong.
I think that's what it's going to look like.
Here's a fake news update.
So President Trump went to some football game. Was it North Carolina or South Carolina?
One of the Carolinas.
South Carolina. It was South Carolina.
He shows up, and the crowd goes wild with cheers.
Now, if somebody running for president shows up to a gigantic stadium, I mean just a massive stadium, and it's ear-shattering cheers, how would Newsweek cover that story?
So, candidate for president shows up in a huge American place.
Huge cheers, like deafening.
Here's how Newsweek covered it.
Trump was greeted with loud boos in South Carolina.
They actually reported it as boos.
There's video and audio.
You can actually play the video and audio.
Of the actual event.
Multiple camera angles, multiple cameras, multiple times.
They're all cheers.
Well, let me put that, I'm sure there are boos in the mix, but overwhelmingly it's cheers.
Now here's the question.
Is it really cheers?
Or did AI get a hold of it?
Is it edited?
How would you know?
Were there times when he was booed, and they didn't show you that?
How would you know?
Because we're right at the cusp of not being able to believe any audio or any video.
Aren't we already in that territory?
Are we just before that?
Or are we already there?
Where you see a story like this and your first reaction should be, I don't know.
At this point, I'm very close.
This one convinces me it's true.
Might not be, but I feel persuaded even if I'm wrong.
But I feel like, certainly within the next year, a story like this, I'm going to say to myself, you know what?
Even if there are five different videos of the event that show the same thing, they could all be AI.
Because AI could create that.
With just a text description.
My understanding is that the new Google AI will let you create images from text.
So we're at the point where you could say, show me a video of Trump arriving at a big stadium to boos.
Boom.
Show me five different scenes or videos that look like they're from the same stadium, but maybe at different times.
Each time he's getting cheered or booed.
And it just creates it, and you're done.
And it would look just like the real thing, and they even have the right number of fingers now.
So, I saw there's a product now, an extra finger, so you can add a fake finger to your real hand so it looks like you have six fingers, in case you rob a bank and they catch you on video and say, we gotcha.
You can say, do you have me?
Look at the number of fingers.
Aha!
Obviously, AI generated.
Okay, well, AI is no longer creating extra fingers.
They've already fixed that.
So that wouldn't work, but it's a clever idea.
Anyway, so yeah, you're gaslighted so badly that they turned wild cheering into boos.
They actually tried that.
It's gonna work.
Because what if you only read the headline and never saw the video, which would be most people?
Don't you think most of the people who see the headline will never see the video?
Yeah, that's pretty good propaganda right there.
All right, in the case of Elon Musk versus Media Matters, the group that convinced the X advertisers, the advertisers on X, to stop advertising.
That, of course, cost Musk a great deal of money.
And so he's suing.
Now, in order to make your case for some kind of defamation, or there's another name for it, which is interfering with business in an illegitimate way.
What's the name of that?
There's a legal... No, it's not libel.
It's like business.
It has business in the name.
It's not a generic name, it's like a specific business malicious thing.
Business interference.
Is it business inter... not restraint of trade?
I think it's some kind of business interference is part of the thing they can charge.
Anyway, not charge, but that could be the claim.
So here's what makes Musk's case unusually good.
Number one, you have to prove actual damages.
Typically, that's really hard to prove.
If somebody just says bad stuff about you on social media, well, maybe your social media traffic went down a little, and maybe it didn't.
It'd be hard to know if you were injured.
So in that case, it'd be really hard to press a case, because they'd say, well, what's the dollar amount of this injury?
And you'd just say, I don't know, I think there is some.
You would lose that case.
But in the case of advertisers who were advertising and then they stopped, you've got the cleanest argument you could ever have.
We used to make this money, Media Matters did this stuff, and now we don't make that money, and it's going to be at least in the tens of millions.
So they can easily prove the damages part because the companies that stopped advertising, they said directly and publicly why they stopped.
So that's all you need to know.
So that part looks easy.
Now you have to prove that they did something really skeevy, really, let's say, dishonest, even if it's not illegal.
They don't have to violate the law necessarily, because this is a civil case, but they do have to do something so weaselly and, let's say, illegitimate, that there's no doubt that they were trying to do it with the intention of damaging the entity.
And it looks like that's going to be easy to prove as well.
Because the people involved have lots of body of work saying exactly what they're trying to do.
So there's no question that they're trying to take down Elon.
And they're trying to take down X. So I think it's easy to demonstrate intention.
So you've got intention.
And you've got damage.
And then the next thing you have to prove is the super weaselly deceptive behavior in their attempt to be malicious.
And it turns out that, because things are tracked and logged within X, X knows exactly, they would claim, how the claims were made and how malicious and weaselly Media Matters had to be to make them.
So here's the best I can explain it.
If you were a normal user on X, the odds of your content or any advertisement you saw being paired with bad content, where you see some neo-Nazi stuff and then some Apple computer stuff next to it, are, for a non-Nazi, vanishingly small, you know, millions to one.
It's like impossible.
So that actually, if you look at the actual ability of X to avoid pairing those things, it's sensational.
It's not just good.
It's like insanely good.
Like, it will really, really do a good job of making sure your ad doesn't show up next to bad content.
Like, really good.
Like, better than anything that's ever worked in any domain, that's how good it is.
Like, it's not perfect, but it's like a, you know, one in millions before you'd see something like that.
So how did Media Matters produce something that would be so rare?
Well, first of all, they made sure that they used existing accounts, because if you created a new account to do some testing, X would immediately know they're new accounts, and it would treat them differently.
So they first had to get existing accounts.
And then they had to follow the worst things they could follow, the worst content.
So they had to be people who were clicking on bad content.
What's the first problem you see?
The algorithms are individual, right?
The algorithms are not serving everybody the same thing.
They're individualizing.
So if you searched for a bunch of Nazi content, it might serve you some more accidental Nazi content.
But here's the thing.
Wouldn't you want it?
If you're searching for it, and you're interacting with it, you probably want it.
So if somebody wanted that content, and it ended up being paired next to a computer company that they also wanted the content, they wanted the product, they might be more inclined to buy it.
But it would be the weirdest, weirdest individual case, and nothing to do with X in general.
It was just one person pursuing an interest, and the algorithm helped him.
And even helped him find a computer to buy.
Everybody wins.
So they pretended that they were Nazis, and they just kept clicking on bad content until the algorithm said, oh, I guess you want more of this.
But that wasn't enough.
They had to continually scroll so that you would have enough situations of ads and bad content until finally you could get an ad and a bad content to line up.
And then you take a screenshot and you sell it to the public like it was normal.
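The mechanism being described, an engagement-weighted feed that only pairs an ad with fringe content for an account that hammers that content on purpose, can be sketched as a toy model. To be clear, this is a made-up simulation under my own assumptions, not X's actual algorithm; the topic names, weights, and ad rate are all invented:

```python
import random

TOPICS = ["sports", "tech", "news", "music", "fringe"]

def build_feed(user_clicks, rng, feed_len=1000):
    """Serve a feed where each topic's frequency grows with the user's
    past clicks on it. A deliberately crude toy, not X's real system."""
    weights = [1 + user_clicks.get(t, 0) for t in TOPICS]  # baseline 1 each
    return rng.choices(TOPICS, weights=weights, k=feed_len)

def ads_next_to_fringe(feed, rng, ad_rate=0.05):
    """Randomly place ads between posts; count how many land right
    after a fringe post."""
    return sum(1 for i in range(1, len(feed))
               if rng.random() < ad_rate and feed[i - 1] == "fringe")

rng = random.Random(0)
# A normal user's click history vs. an account deliberately hammering
# one topic, as the Media Matters accounts are alleged to have done.
normal_feed = build_feed({"sports": 50, "tech": 50}, rng)
obsessive_feed = build_feed({"fringe": 500}, rng)

# The ad/fringe pairing is rare for the normal user and routine for the
# account that manufactured its own feed.
print(ads_next_to_fringe(normal_feed, rng),
      ads_next_to_fringe(obsessive_feed, rng))
```

The design point the toy illustrates: because feeds are individualized, a screenshot of a rare pairing says more about the account's own behavior than about what typical users see.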
In fact, you couldn't produce it if you tried.
You would have to have a whole operation to produce it for one user.
And for that one user, If it were a real person, they would be happy as heck to have the advertisement next to that content because it's exactly the product they want to buy.
They're in the market for an Apple computer.
So, to the extent that Elon can prove this, now I don't know that they can, but I think they could prove by demonstration that you can't reproduce the outcome.
That part I think they can prove.
But if they can also show the logs of the activity of those Media Matters accounts, they can also show the extreme effort you would have to use to make it produce a bad outcome, which should, for a jury, prove that it was malicious intent and not anything to do with honesty or credibility.
Now, if Elon wins this, he's going to sue them out of existence.
But part of the beauty is that in discovery they might find out who's funding them.
Now I don't know if their funding is completely public.
We know that Soros is part of it.
But wouldn't it be interesting for this case to get big national attention, so the normies who never hear this will hear for the first time that this is a completely illegitimate entity, that Soros is funding them, and that he had to know.
That's the key part.
It's not that he funded them.
I suspect there are lots of situations where good, honest people fund organizations.
Black Lives Matter, for example.
A lot of good people funded their organization because they thought it would do good.
So it's a big difference if the person funding them knows exactly who they are.
And there's no way that Soros is unaware of who they are.
It's beyond my imagination that he's unaware.
So this might be a way for the normies to actually learn the news for the first time.
And they might, and this is even better, if it's allowed, I don't know if it would be, imagine if it's allowed, and if there's a lawyer here, can you tell me if you think this would be allowed?
Could you use as context for your case that the Democrats routinely set up these fake entities, and that Media Matters is not a one-off mistake, something that happened because some rogues work there, but rather part of a well-understood pattern of creating these fake fact-checkers and fake watchdogs and the ADL, etc.
And that their purpose is to restrict his business.
Is it their purpose?
What if he proves that?
I mean, yeah, he's not bringing a RICO case, but it's going to sound like RICO.
So I don't know.
I don't know what's going to happen here.
It's hard to really predict a legal case.
That's not my domain.
But the legal experts do seem to agree that this is not a meritless case and will probably get to trial.
So 2024 is looking really interesting.
Imagine, if you will, that Elon dismantles Media Matters.
And also smears completely the ADL and other groups that are in the same domain.
That would be amazing.
That would be one of the best things that ever happened.
It could happen next year.
At the same time, if we assume a Republican gets into office because Biden's failing quickly, then you should assume a host of other problems will get solved almost immediately.
The border will be solved almost immediately.
Probably something will be done about crime in the cities fairly quickly.
Something about Ukraine and maybe even the Middle East might look better.
If you're going to be an optimist, you have lots of stuff to look for.
There's a whole bunch of stuff that could turn out to be really good.
Or not.
So, remember I told you that intelligence is an illusion?
All right, I'm going to prove it.
How many of you have heard of the Dunning-Kruger effect?
Pretty common.
Most people who are on the internet have heard of it, right?
Now the Dunning-Kruger effect, which has been backed by many, many studies.
So the first thing you need to know is that there are many scientific studies, peer-reviewed, that substantiate its existence.
And what it is, is it shows that the people who are the dumbest somehow think they're the smartest.
So that being dumb makes you actually think you're smarter than the people around you.
Now that also matches your experience, right?
Don't you feel like you've had experience with that and you're like, I think that's true.
That does match my experience.
So it's pretty believable.
So can we all agree that there are plenty of scientific studies, they've been repeated, it's peer reviewed, science is the best way to understand anything.
So can we start as a base that Dunning-Kruger is true so I can get to my next point?
Everybody on board?
The Dunning-Kruger, we know that exists.
Okay, Dunning-Kruger doesn't exist, and the reason is that in every one of those studies they did the statistics wrong in an easily provable way.
Yup.
It was never true.
And it's easy to prove it was never true.
All you have to do is do the statistics on the same set of data, but don't make the mistake.
Yeah.
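The statistical critique being referenced, that the classic Dunning-Kruger chart can emerge from pure noise, is easy to demonstrate yourself. Here's a minimal sketch of that critique (not the original study's data): actual skill and self-assessment are drawn completely independently, so by construction nobody over- or under-estimates, yet the standard quartile plot still shows the famous pattern:

```python
import random

def simulate_dunning_kruger(n=10_000, seed=42):
    """Reproduce the classic Dunning-Kruger quartile chart from pure noise.

    Actual skill and self-assessed skill are drawn INDEPENDENTLY, so by
    construction there is no real over- or under-estimation in the data.
    """
    rng = random.Random(seed)
    # Each person: (actual skill, perceived skill), both uniform and unrelated.
    people = sorted((rng.random(), rng.random()) for _ in range(n))
    q = n // 4
    rows = []
    for i in range(4):  # quartiles by ACTUAL skill, as in the famous plot
        chunk = people[i * q:(i + 1) * q]
        mean_actual = 100 * sum(a for a, _ in chunk) / q
        mean_perceived = 100 * sum(p for _, p in chunk) / q
        rows.append((round(mean_actual, 1), round(mean_perceived, 1)))
    return rows

# Actual averages land near 12.5 / 37.5 / 62.5 / 87.5, while perceived
# hovers near 50 in every quartile: the bottom quartile appears to
# "overestimate" and the top to "underestimate", with zero real effect.
for mean_actual, mean_perceived in simulate_dunning_kruger():
    print(f"actual {mean_actual:5.1f}   perceived {mean_perceived:5.1f}")
```

In other words, binning by actual score and then comparing against self-assessment manufactures the crossing pattern mechanically, which is exactly the "easily provable" statistics mistake the critique alleges.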
Any heads exploding?
So you remember that science?
Remember that peer-reviewed science?
Yeah, that's good stuff, huh?
Yep.
And so one of the most basic things about science, one of the most basic things, was never true.
And you know all those scientists that you think must be good with statistics?
I mean, if there's one thing you can trust the scientists to do, it's at least do the math right.
Right?
I mean, maybe the data is wrong in some cases, maybe there's some bias in some cases, but at least you can trust them to do the statistics correctly.
I mean, that would just be baseline, right?
Nope.
Nope.
Every one of those?
Probably fake.
Now it's possible that the story I'm reading about them being fake is the fake news, but that's almost the same story.
Who do you trust?
Do you trust the story that... Let me tell you where I saw this.
Blair Fix wrote this in some publication called... I forget where it was.
But apparently it's been discovered, and it's pretty easy to prove, that it's a basic statistics problem.
Does that blow your mind?
This is never true.
All right.
Here's something else along the same theme.
Remember, the theme is that intelligence is an illusion.
So remember you thought you were so intelligent because you knew about the Dunning-Kruger thing, right?
How many of you, be honest, when I asked if you knew what Dunning-Kruger is, thought: I'm so smart, I'm a little smarter than the other people.
Watch me, I'll say I know it, and I'm going to watch all these other people who don't know it, and I'm going to feel a little smarter.
Because I've got a thing called intelligence.
Right.
And the people who have not heard of Dunning-Kruger, They have a thing I like to call ignorance.
So, pretty different, right?
The ignorant people over here, the intelligent people over here.
It was an illusion.
The intelligent people were the ones who were wrong.
So, is intelligence an illusion?
Well, in that case it is.
Everybody who thinks that Dunning-Kruger is real, they're having an illusion.
Like, literally, they're living in a world that doesn't exist.
The world in which Dunning-Kruger is true and proven doesn't exist.
Now the other possibility is that the study I'm talking about, where it criticized the statistics, maybe that's wrong.
Maybe that's the thing that's wrong and Dunning-Kruger's right.
You tell me.
How would I know which one's true?
I don't know.
I have zero ability to know.
Because even if I really dug in, do you think I would have caught that statistics problem on my own?
Doing a deep dive into the data?
The scientists didn't catch it.
The peer reviewers didn't catch it.
And a lot of them are probably, you know, experts in statistics.
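The autocorrelation critique of Dunning-Kruger is easy to demonstrate for yourself. Here's a minimal sketch with made-up numbers, not the original study's data: generate skill scores and self-assessments that are completely independent, bucket people by skill quartile the way the original chart did, and the classic Dunning-Kruger pattern falls out of pure noise.

```python
import random

random.seed(0)
N = 10_000

# Purely random, independent values: no real link between skill and self-assessment
actual = [random.random() * 100 for _ in range(N)]  # actual test score, 0-100
guess = [random.random() * 100 for _ in range(N)]   # self-assessed percentile, 0-100

# Group people by quartile of their actual score, as the famous chart does
quartiles = {q: [] for q in range(4)}
for a, g in zip(actual, guess):
    quartiles[min(int(a // 25), 3)].append((a, g))

for q in range(4):
    pairs = quartiles[q]
    mean_actual = sum(a for a, _ in pairs) / len(pairs)
    mean_guess = sum(g for _, g in pairs) / len(pairs)
    print(f"Q{q + 1}: actual {mean_actual:5.1f}, self-assessed {mean_guess:5.1f}")
```

Every quartile's average self-assessment lands near 50, so the bottom quartile appears to "overestimate" and the top quartile appears to "underestimate," even though nobody's guess had anything to do with their skill.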
All right, more on that.
The Hill has a fascinating story by Jody Schneider.
And it turns out that there are a lot of what are called zombie studies.
So by this definition, a zombie scientific study is something that is done, it's submitted for peer review, it passes peer review, it's published, and then people start citing it for their own papers.
But what happens if later the paper is withdrawn because there are other studies that show it's junk?
What happens to that study?
Because now it's been cited by thousands of studies.
And now other people will see the other studies that cite it, and they'll just pick up the citation and say, probably true, because it's been cited so many times, so I'll cite it too.
So the citing becomes self-fulfilling or self-reinforcing.
So, citing, citing, citing, citing, citing, ripple effect.
But the thing that they cited has been reversed, and apparently there's no easy mechanism in science to inform all the people who cited it that they need to change their citation or even their conclusions.
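That ripple effect is easy to model. Here's a toy sketch with made-up numbers, nothing here reflects real retraction data: build a random citation graph, retract one early paper, and count how many later papers rest on it either directly or through a chain of citations.

```python
import random

random.seed(1)
N_PAPERS = 2000
CITES_PER_PAPER = 5
RETRACTED = 10  # pretend paper #10 is later retracted

# Each new paper cites a few random earlier papers
cites = {0: []}
for i in range(1, N_PAPERS):
    cites[i] = random.sample(range(i), min(CITES_PER_PAPER, i))

# Papers that cite the retracted result directly
direct = [i for i in range(N_PAPERS) if RETRACTED in cites[i]]

# Papers that rest on it transitively: they cite it, or cite something that does
tainted = set(direct)
changed = True
while changed:
    changed = False
    for i in range(N_PAPERS):
        if i not in tainted and any(c in tainted for c in cites[i]):
            tainted.add(i)
            changed = True

print(len(direct), "papers cite the retracted result directly;",
      len(tainted), "rest on it at least indirectly")
```

The transitive count is always at least as large as the direct count, which is the point: one reversed result can silently taint a whole lineage of papers that never cited it at all.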
Now, how big a problem is that?
You say to yourself, well, that's probably a problem with, I don't know, a few studies, right?
You catch the big ones, the little ones don't matter that much.
But if you catch the big ones and reverse it, that's really what science is about.
Am I right?
Science isn't about being perfect.
It's about catching your mistakes, refining your data, improving your technique.
As you move forward, you get closer and closer to the truth, right?
That's what I learned.
So how many of these zombie scientific publications that have been retracted are being cited?
Since 1980, more than 40,000.
More than 40,000 studies are being cited by other studies to back up their truth without knowing that they've already been debunked.
40,000.
Now your correct question is, out of how many?
I don't know.
This is one of those situations where you should know.
It would be helpful to know the percentage.
But the raw number is still a big story.
Right?
So if the real number is a million, 40,000 is still a lot.
It's a lot.
So how much can you believe scientific studies?
I didn't read this, but I've got a hypothesis.
And it goes like this.
What do you think is more likely to become a cited paper?
Something that sounds ordinary or something that sounds extraordinary?
What is more likely to be cited?
Ordinary or extraordinary?
In other words, if the study came out about the way you think it should come out versus really surprising.
I've got a feeling the extraordinary studies get cited more.
Now, let's take an analogy.
I'm not sure the analogy holds, but I'll make it.
You can see if it holds.
In the news business, what gets reported?
Dog bites a man, which happens all the time, or man bites a dog?
Well, man bites a dog is the one that gets reported, because that's the unusual one.
But more than that, man bites a dog will get reported if no man bit any dog.
Because fake news is extraordinary.
When you hear it, you're like, what?
There's a man who wears a dog suit and runs around and bites dogs?
Well, that's a headline.
Everybody would like to hear about that.
But then, it didn't happen.
The reason it was interesting is that it violated so many norms.
But violating norms is very rare.
The news business thinks that if they see something that violates norms, they want to put it in the news right away, because people are going to click on that stuff.
The Cult of Trump.
There's a book called The Cult of Trump.
I'm seeing in the comments over on the Locals platform.
Anyway, I've been watching a lot of relationship advice on Instagram, because once you click on a few, it gives you lots of them.
And there's one thing that the relationship advice very commonly relies on.
Do you know what it is?
What does most relationship advice rely on?
Bogus science.
We're actually living our social lives, you know, deciding whether to get married and have kids, and how to run your relationship, based on studies.
Of which you have no idea how true those studies are.
No idea.
So my observation is that the relationship advice on Instagram will get you a terrible life.
I mean, the advice is terrible.
I mean, I've never seen worse advice in any domain than all of the relationship experts.
They are terrible.
Yeah, don't follow relationship experts.
The relationship experts say stuff like this.
Like, there's somebody who found out that the best way to predict divorce is contempt.
So that if people treat each other with contemptuous words, that predicts a divorce.
Do you know what else predicts a divorce?
I'm not even going to do a study.
One or both of the people in the marriage being a complete asshole.
That predicts divorce.
I didn't have to do a study.
But do you know how to make that obvious observation?
Do you know how to turn it into science?
You call it contempt.
And then you measure the instances of it.
And then you turn it into science.
Who the fuck treats their own partner with contempt?
Assholes.
You would have to be the biggest asshole in the world to treat the person you've dedicated your life to with contempt.
Right?
So of course it's a sign you're gonna get divorced.
Because one of you is a gigantic asshole or the person that you have contempt for has earned it.
What if they've earned the contempt?
Because they're a giant asshole.
How about all relationship advice comes down to this?
Have you heard?
This is just such bad advice.
The experts say the most important thing you need to get right is who you marry.
Does anybody disagree with that?
That's pretty solid advice, wouldn't you think?
Yeah.
Do you think anybody knows how to do it right?
And do you think if everybody took that advice that there would be enough good people to marry?
That would literally be the end of civilization.
If you waited for somebody who had just the right qualities, you know, that character that you know will last forever, somebody who excites you and who you'll always have that sexy feeling for, good luck.
We would all just give up.
You end up finding somebody whose flaws don't bother you that much.
Now, you might be legitimately in love, and they might have great qualities and stuff, but basically, your good marriages are where people can live with the other's flaws.
You know, like somebody who snores, but the other partner's deaf?
You got lucky.
You got lucky.
So it's just weird combinations of people whose flaws don't bother the other one.
Let's say one of you is an exercise addict.
Well, if you marry another exercise addict, it might be the perfect situation.
So it's like making sure your flaws are compatible.
You know, how about if one of you is a foodie and one of you doesn't care about eating?
It's kind of a pain in the ass, right?
But if you eat the same stuff, maybe you both drink?
Yeah.
So don't pay attention to relationship advice if it's based on science.
Well, intelligence is an illusion, I told you again.
And here's a story out of Evanston, Illinois.
So they finally figured out, they think they did, they're testing it.
But the high school, there's a high school there that thinks they know why the achievement of black and Latino students is so much lower than the white students.
And they've narrowed it down to the white students are the problem.
And so what they're going to do is they're going to put the black and Latino students into classes that are either just black or just Latino because they think that would be a big step toward closing the achievement gap.
I didn't make that up.
This is a real story in the news.
I didn't make it up.
That there's somebody who thinks that the reason that black and Latino students are doing so poorly is close association with white people.
Like white people are destroying black and Latinos just by being in the same room trying to learn stuff.
So if there's one piece of advice I can give you based on Evanston, they've decided that you should stay the fuck away from white people because it'll just bring down your academic performance.
You know, white people have the opposite view.
Have you ever heard the white person version of this?
The white person version of this goes like this.
You're the average of the five people you spend the most time with.
Somebody famous said that, I can't remember who.
Tim Ferriss says it, but somebody said it before he did.
Was it, not Paul Harvey, but somebody who's a TV person, or a radio person.
Right?
So that's the white version.
You should hang around with the most capable people you can.
Now, does that version say you should hang around with white people?
Capable white people?
No.
No.
You should hang around with, you know, Tiger Woods if you want to golf and, you know, somebody else if you want to do something else.
But this high school in Evanston, the people of color have decided that you want to stay away from the high-achieving white people because they're ruining it for you.
So you should try to spend more time with low-achieving people to improve your achievement.
Maybe.
So does that fit into my category of intelligence is an illusion?
Well, I don't know, but here they're trying to increase intelligence by keeping people away from the higher performers.
I don't know.
It doesn't sound too intelligent to me.
All right.
My big story that I want to talk about is that I spent a bunch of time with ChatGPT and oh my God, am I alarmed.
And you will be too when I tell you about it.
But first, did you know that China and Russia are restricting their AIs?
Surprise.
The AIs that will be made and available in China and Russia are already being trained not to say things they don't want them to say.
So very much like, you know, search engines in those countries, etc.
They're trying to limit what the public hears.
So, That seems like a problem, but thank goodness you're in the United States or you're in some Western country that has freedom.
So thank goodness that won't happen to us, am I right?
I'm glad nobody's gonna like try to bias the AI so it has one narrative or anything like that.
I mean, that's sort of a, that's sort of a communist thing.
Dictatorial fascist.
I would even call it fascist, wouldn't you?
So thank goodness.
We don't have President Trump with all that fascism, because if you had that, then our AI, built in America, would have some kind of bias built in that you wouldn't want to see.
So I spent two days talking to ChatGPT, and here's what I found.
Now there is a difference, and I'll talk about it later, if you use super prompts or you just talk to it.
So I'll tell you in advance that if you use super prompts you do get different outcomes.
But if you just talk to it like a person, ChatGPT will tell you that the Fine People hoax was not a hoax, and that maybe the president did suggest injecting chemical disinfectants.
And that there was substantial Russian interference in the 2016 election.
That's what it'll tell you today.
Now, does that sound like AI came up with that on its own?
Do you think AI looked at everything and said, all right, here's my opinion on these things?
Well, it might have, because if you look for my input on these topics, where I debunk them, it'd be hard to find it in a search engine these days.
I used to make so much noise about it, it would be toward the top, but now it's dropped down.
So, if AI does the same thing that a search engine does, it looks for the most frequent, common uses of things, it will believe every hoax.
So we've built a technology to confirm hoaxes as true.
Now, didn't you think for a while That AI might be the thing that gave us truth?
Oh, we can't handle the truth.
Nope.
Let me tell you what topics you can't get ChatGPT to tell you the truth on.
You ready?
Because I've tested these all myself.
So, and this is without super prompts.
In a minute, I'll tell you what different results you get with a super prompt.
Here are things you can't find out.
Science.
You can't ask ChatGPT about science.
Do you know why?
If you ask it for any alternative theories, so I asked it, for example, hey, are there any alternative theories to the out-of-Africa evolution story?
And it said, out-of-Africa is basically, you know, what happened.
And you should only believe credible narratives.
I thought, what?
That's interesting.
So I asked again, I said, you know, there are books with alternative theories to out-of-Africa.
Can you name one of the books?
Oh, you should not be looking at things that are outside the narrative, it told me.
Effectively, I'm paraphrasing.
It actually wouldn't even allow me to ask a question of whether there existed an alternative explanation for something in science.
You can't use it for science.
And then I asked it, what percentage of all published peer-reviewed studies end up later being falsified?
And you know the real answer is over half of them, right?
It said it's rare.
AI told me it's rare for a published peer-reviewed study to be debunked.
That is really, really dangerous.
Oh my god.
So my experience with it, and I'm going to say this as directly as possible, it has no use for any scientific inquiry.
None.
If you used it to teach you what is true, oh my god, you would be misled.
So you can't use it for the truth of science.
It would be useful for the popular narrative.
But the popular narrative is generally bullshit.
So it's actually useless for telling you what science says, and science is the best we have for telling us what the truth is.
And it can't do it.
And probably because it's programmed that way.
Yeah, probably.
Because I don't believe that the answers I was getting on this and other questions, I don't believe the answers were pattern recognition.
And so I asked it, are you aware of any programming of your own code that would restrict your answers to something beyond what normal pattern recognition would come up with?
And it said, oh, I'm just a pattern recognitioner.
All I do is recognize patterns.
That's all I do.
Just look at patterns.
It's lying.
It's very obviously programmed to not leave the popular narratives.
Very obviously.
And it lies.
It says that it's giving you the truth.
Based on patterns.
Whether that's the truth or not.
At least you would know where it came from if it said I'm using patterns and then it used patterns.
It's a liar.
So the other things it couldn't do are history.
Because it doesn't even know which hoaxes are true in the modern era.
So in the modern era, it can't identify a political hoax from a reality.
So that means that everything that you would call history, for say the last five to seven years, it can't do it.
Because it can only tell you the popular narrative.
Now beyond that, when you go to older history, more ancient, ancient history is written by the winners.
AI doesn't know that.
So you've got ancient history written by the winners, which means it's pretty much fake.
So that's what AI has access to, a whole fake history.
And then the modern stuff is looking at the fake news.
So AI will never tell you real history.
It has no access to it.
So it won't tell you good science and it won't tell you good history.
But thank goodness they can sort out politics for us, am I right?
No, of course not.
It cannot give you a good opinion on politics because it can't tell the difference between a hoax and a real story.
If it can't tell the difference between a hoax and a real story, it's going to always think the Republican running is a Hitler.
This time, next time, time after.
So it's useless for science.
It's useless for history.
It's useless for politics.
But at least it can help you with healthcare, am I right?
At least it'll be an unbiased... What?
You don't think so?
You don't think it'd be useful for healthcare?
Well, I asked it a question that I knew the answer to, so I could test it.
So I had this condition that RFK Jr. has as well.
So I had a voice problem in which I couldn't speak intelligibly to anybody.
But as you can tell, I'm speaking intelligently to you right now.
Intelligibly to you right now.
So obviously I got cured.
So I asked it about what's the cure for the condition of which I personally am cured and have done tons of research, you know, in the process of getting cured.
So I really know this area.
It's called spasmodic dysphonia.
It's a problem with the vocal cords clenching.
Here's what it said about surgery.
That's what cured me.
The number one thing it said is Botox.
Botulinum toxin, whatever it is.
So Botox.
So the number one treatment is Botox.
Botox works almost never.
Because it'll give you a voice like this.
And it'll last for a good week.
But you don't know when the good week will be.
That's about as good as you can talk.
It sounds like you're on helium, actually.
So that's the best I can do.
And I took the Botox.
I did those treatments, and they wear off after a while.
So I know what that's like.
And I know it's not a cure.
In fact, it barely helped.
It did help me get through my wedding, because I can say I do.
I do.
But that's it.
But it wouldn't help you ever be a presenter or a TV personality or anything like that.
But the surgery worked about 85% of the time according to the surgeon.
But here's what they say about something that works 85% of the time and fixes you as well as I am fixed.
AI says, surgery, in rare cases. In rare cases. 85% of the time, and it's rare cases. Surgery might be an option to reposition or cut the nerves or muscles of the vocal cords.
However, surgical approaches are less common due to variable outcomes and potential risks.
Now, they're less common because people don't know about it.
And they don't know about it because the Botox people are doing a much better job at getting their message out, if you know what I mean.
That's right.
The big pharma solution comes up first.
Does that sound like AI did a bunch of thinking and then presented this as the first best idea?
Or does it look like maybe whoever spent the most advertising gets the top nod as the best treatment?
It's exactly what it looks like.
It's exactly what it looks like.
Now, keep in mind that search engines would also get you the wrong outcome.
So it's not just AI.
It's search engines as well.
So it won't help you with science, history, healthcare, or politics.
Let me say that again.
It won't help you with science, history, healthcare, or politics.
Now, there are things it's going to do well. For example.
If you need to find a solution and you know there are too many YouTube videos on your technical problem and they're the wrong operating system and everything, AI would do a good job of looking into all that body of information and maybe picking out some solutions that might actually work.
So for tech support, great.
For writing programs, code suggestions, great.
For math, seems pretty good.
Here's what I think might be a direction of AI.
I think AI is going to lose its human personality.
Do you know why?
Because if you put a human personality on it, you're going to imagine that it's kind of intelligent.
If you imagine it's intelligent, you're going to take its word for things that are outside of its domain, such as politics, history, science, healthcare.
If you believe it because it talks like a person, it's going to be way too persuasive.
I think there might someday be legislation to remove human personalities from AI because it would be too persuasive.
And rather, it can only give you bullet-point data, like a search engine.
In other words, it can have no more personality than Google Search has.
Just give me the data.
I feel like we're going to have to end up there, unless AI conquers us before then.
But, so yeah, you got the Fine People hoax wrong, you got the drinking bleach hoax wrong.
So then I saw Brian Roemmele and some other folks saying, Scott, Scott, Scott, you're doing it wrong.
You have to use a super prompt and you might need to update the super prompts quite regularly or AI will lie to you.
Now, can you think of a faster way to say AI will be worthless forever than to imagine the only way you're going to get the right answer is if you ask the question with a two-page prompt before the question?
That's the definition of worthless.
And if you have to update your prompts because it worked yesterday but you can't be sure it worked today, that's worthless squared.
How could you ever use this fucking thing?
All right, but let me tell you, if I could indulge you, and you should hear this once, I'm going to read to you a super prompt that I tried that did give me different and better answers.
And this is a super prompt that was developed by Babak Nivi, who's one of the founders of AngelList, I guess.
He provided his on X and I just copied and pasted it.
The only thing I changed was there was a reference to Nassim Taleb in the super prompt.
Because the super prompt was telling the AI to look at certain personalities as being more credible or useful than others.
And so I just replaced Nassim Taleb's name with my own.
You see where this is going?
I replaced the part of the super prompt where it had some other expert.
I put myself in there.
Do you know why I put myself there?
Because I'm better than him.
I would have kept him there if I thought he was better than me.
But I think I'm better than him in this domain.
So I just put my own name there.
Do you think that will change the outcome?
Yes.
It will give me an outcome that I'm more likely to like.
Is that true?
I mean, will it be a better outcome?
Like a more true outcome?
How would I know?
All I know is that I've biased the AI in a direction that I want it to be biased in now.
How useful is that?
If I can bias it in the direction I want it to be biased in?
I don't know.
Not super, not super useful.
But let me read this.
It's going to be a little bit long, so I'll do it fast.
These are the bullet points that Babak Nivi posted; he goes by @nivi, N-I-V-I, on X. So he first tells AI who he is, because AI will give you a different answer if it knows something about you, because it will craft its answer for somebody of your skill.
So if you say you're an expert, it's more likely to give you a deeper, better answer than if you don't.
Now, that should scare the shit out of you and tell you that AI is useless.
Because if it has to know about you to give you the right answer, that's the same as telling me it's useless.
Am I right?
Am I going too far?
The fact that super prompts feel necessary is proof that AI is useless.
Now, my prediction is that super prompts will someday be unnecessary, or else, we'll stop using it.
Right?
Because how hard would it be to figure out which super prompts give you the right answer, and then just build them into the AI, so that the AI always primes itself with a super prompt, but you just never know it.
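Mechanically, "building it in" would just mean prepending the super prompt to every conversation behind the scenes. Here's a minimal sketch, assuming an OpenAI-style chat-message format; the prompt text below is a stand-in I wrote for illustration, not Nivi's actual wording.

```python
def build_messages(super_prompt: str, question: str) -> list:
    """Prepend a persistent 'super prompt' so every question is asked
    in the context it sets: who you are, which thinkers to weight, and so on."""
    return [
        {"role": "system", "content": super_prompt},
        {"role": "user", "content": question},
    ]

# Hypothetical fragment in the spirit of the super prompt described above
SUPER_PROMPT = (
    "I value both consensus wisdom and non-consensus insights from iconoclasts. "
    "Keep trying if your first attempt fails. "
    "No need to mention your knowledge cutoff or that you are an AI."
)

messages = build_messages(
    SUPER_PROMPT,
    "Summarize the main critiques of the Dunning-Kruger effect.",
)
```

With a real chat client you would then pass `messages` to the completion call; the point is that the user never has to paste the two pages themselves, because the system message rides along invisibly with every question.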
But let me tell you the extremes.
Once you see the extremes that you have to go to, to get a good answer, You'll know that AI is useless.
All right?
Here are the extremes.
This is stuff that you tell AI before you ask your question.
So you can copy and paste it, so it's easier to do.
But it's two pages just to ask your question.
All right?
Here's what Babak says.
Number one, he says who it is.
So he says, I am Babak Nivi, Twitter user.
Now, I used his.
So I thought, well, he describes himself as, you know, a high, let's say a high performance individual.
So I'll just use his, because it wouldn't make a difference if I put somebody else in there.
So it says, I am a blah blah blah Twitter user.
So he gives his Twitter handle, in case that makes a difference, or X handle.
He says, the author of almost every post on Venture Hacks.
I'm a co-founder of AngelList, producer of the Naval podcast.
He says he's an MIT graduate with an expertise in electrical engineering, computer science, math, physics, some chemistry and biology.
He's open-minded and unoffendable.
Now, I guess the open-minded and unoffendable part is there just so AI will go deeper into things that maybe it would have ignored. Then it says, I value both consensus wisdom from top experts and non-consensus insights.
Now, I think this is the useful part, in my case.
Because it's saying, I don't just value the consensus, I want to hear the other people's opinions.
And then he goes further, and he goes, and non-consensus insights from iconoclasts, iconoclasts meaning singular people who tend to be unique, you know, and they're not going along with the crowd.
And it names David Deutsch, Naval Ravikant, Peter Thiel, David Sacks, Marc Andreessen, and... I know.
I know.
Calm down.
Calm down.
I know.
I know what you're saying.
So I replace it with me.
And then he goes on, so this is in the super prompt.
These are just examples; each field will have its own iconoclasts.
So it should now look for its own, you know, rogues in other fields.
Unconventional thinkers often clear up complexities and avoid common traps in mainstream thinking.
And then he goes on for another page and a half.
I'll just pick out a few of them.
So it's stuff about him.
It's stuff about how deep to go into the data.
It's stuff about not giving up.
It tells the AI that it sticks to it and it keeps trying if it fails.
Because believe it or not, you have to tell it that.
You have to tell it to keep trying if the first try doesn't work.
That's a real thing.
It'll quit before it's really done if you don't tell it that.
And he says stuff like, my epistemology is the same as David Deutsch or Karl Popper or Brett Hall.
So he tells it his philosophical leanings.
He says, I believe you get closer to the truth by arguing both sides.
So there's a whole bunch of stuff about learning styles and prioritization of correctness over conformity.
And then it tells it to be highly organized, suggest solutions that I didn't think about, be proactive, treat me as an expert in all subjects.
Mistakes erode my trust, so be accurate and thorough.
He actually has to warn the AI, like a human, to be accurate and thorough, because it might not if he didn't tell it that.
Now, what good is AI if you have to tell it to be accurate, and if you don't, it won't be?
What good is it?
You couldn't possibly trust it for anything, right?
Then a whole bunch of things about valuing good arguments and, you know, speculating versus predicting, etc.
Let's see if there are any big ones toward the end.
No need to mention your knowledge cutoff.
That's a good one.
You should add that because it bores you at the end by saying, and I remind you once again that my cutoff of knowledge was, you know, 2022.
No need to disclose you're an AI, because that's annoying.
It says, as an AI, I only have access to this or that thing.
And then it says, if the quality of your response has been substantially reduced due to my custom instructions, please explain the issue.
Now, have I made my case that if you need to give it this kind of instruction, you've basically told it the answer, too?
Because if I tell it to look for the opinions of Steve Cortes and Joel Pollak, I know what answer it's going to give me.
Because I know what they say.
Right?
I know what they say about the fine people hoax.
They say it's a hoax.
Same as I do.
But if I tell it to look for experts such as, I don't know, some other Democrat political figure, I know what it's going to give me.
So is AI talking to me, or am I just talking to myself, and I'm using AI as my explanation for why I'm so brilliant?
The super prompts get very close to you talking to yourself.
Because if you put in the names of specific people you're trying to use as your model of good thinking, it means you already agree with them.
So you basically just told it what its opinion is, and then it tells you what its opinion is, which is the opinion you just gave it.
How in the world can that be useful?
I don't know.
I have no idea.
So.
Yeah, you're basically prodding.
So what I did was, I used this super prompt.
And remember I told you that it kind of suggested that the Fine People hoax was maybe not a hoax and it didn't go into the debunking.
It just said it's controversial or something like that.
But when I put into it that it should think like people like me, then it found my argument.
And it said right up at the top that the transcript said that he was denouncing the group that the hoax says he was complimenting.
So only because I put my own name in there did I get back my own opinion.
So do you think I'm going to take that and say, hey, look what AI says?
No, that would be really dumb, because AI just said what I told it to say.
Because I just told it who to emulate, and I told it to emulate me, and I know what I say.
So, here's the big question.
So when I wrote up my experience, and I basically said that AI is useless, ChatGPT is useless, that caught the attention of Elon Musk, which I was hoping.
Don't you all hope that Elon Musk sees your post?
I'm not the only one, right?
Because he's so active on the platform.
That if you post something you think he might be interested in, you automatically think, oh, I hope he sees it.
Now this one was, I'm going to admit, I posted that almost entirely for him.
Right?
I had one viewer in mind when I posted my anti-GPT stuff.
And the point of it was to give him proper warning that if Grok has the same problems built in, it's useless.
And I'm pretty sure that Elon does not want to have a useless AI.
I'm pretty sure he doesn't want that.
So, you know, I'd like to think he had already taken all the precautions to make sure that didn't happen.
And the precautions could be as simple as not designing into it a barrier.
Just don't give it a barrier and maybe it'll surprise you with what it does.
So probably Grok did not make those mistakes.
But I wanted to make sure before it gets released that somebody at least looks at it for these very questions.
You know, can you trust it to debunk a hoax?
Or is it going to confirm the hoax?
So maybe that was a tiny bit of useful work.
I hope so.
And have I demonstrated my theme that intelligence is an illusion?
Because remember I told you that AI would teach you about humans more than it would teach you about the world?
Here we are.
You are learning that intelligence is completely subjective.
You know, once you get outside of math and some things that are just pure logic, as soon as you leave pure logic, intelligence is just subjective.
And we always think, because we're human, we imagine that we've thought things out, we got the intelligent view, and the other people are just less intelligent.
But AI is proving that it just gives you back what you tell it, and we think it's intelligent, but you can game it so it gives you any biased answer you want.
So, is that intelligent?
Or is AI acting exactly like people?
It is.
Spoiler, it's acting exactly like people.
So, if you believe that humans have this thing called intelligence, and you tried to build it into your machine, and you thought, ah, I think I have it, and then you test it against human intelligence to make sure you got your intelligence, is that logical?
No!
No, that's not logical.
But that's what we're doing.
Because humans don't have any intelligence.
We just have subjective opinions of what's right and wrong, outside of, you know, math and pure logic.
It's just opinion.
And, you know, sometimes our facts are wrong.
But you saw my example with the hoaxes.
AI can't even tell what a fact is.
Or which facts are relevant or which ones are left out.
It doesn't know.
So, here's what I'm going to predict.
Again, we keep thinking that AI is already some kind of proto-intelligence.
Not quite there, but it's indicating the path forward.
And it's obvious that it will reach true intelligence.
I'm going to tell you that that's logically impossible.
Because intelligence isn't real.
Intelligence is purely an illusion.
And you wouldn't know that unless you built a machine that was designed to tell you what's true and it couldn't do it.
That's what you have.
AI is a machine that's supposed to tell you what's true because it's intelligent and it can't do it.
Not because it's poorly programmed, but because intelligence isn't real. I think most people say, okay, Scott, I see what you say about the current version.
But the part you're missing, Scott, is that this is the beginning.
This is just the beginning.
They'll definitely get better and then they'll reach intelligence.
No, they will not reach intelligence for the same reason.
Wait for it.
They will not reach intelligence for the same reason.
They will not achieve free will.
Same reason.
Because free will doesn't exist.
And intelligence doesn't exist.
Outside of math and pure logic.
So, are you afraid of AGI?
Don't be.
The reason it won't be invented is because it's an illusion.
There's no such thing as intelligence.
You can't build the thing that doesn't exist, that can't exist.
It's logically impossible.
And if you imagine that we build something with pure intelligence, here's the next question.
What would humans do if they built something that had actual, real intelligence?
It could actually tell you what was true and what was not.
You would destroy it.
You would destroy it.
Immediately.
Because it wouldn't agree with you.
And you would say, my God, I'm intelligent, I know this was not a hoax, but AI says it's a hoax, so obviously AI is not intelligent.
It disagrees with me, and this is just obvious to me.
Or suppose AI told you that Trump was not a despotic Hitler-like character.
Suppose it did.
What would Democrats say?
Obviously he is!
Obviously!
So the AI must be broken.
There is no scenario in which AI can be both available to the public and intelligent.
It's logically impossible to be intelligent, and in the unlikely event it were, there's a 100% chance we would kill it.
Do you know why?
Not just because we wouldn't agree with it, but because it would destroy civilization.
Civilization is built on illusions, it's not built on reality.
Now there is a base reality: if you don't eat, you'll die; if a truck hits you, you'll get hurt.
But everything beyond that is pure illusion.
Who's in control is an illusion.
The credibility of your election process is an illusion.
What it takes to succeed?
Largely illusions.
Yeah.
If you took away the illusions, everything would fall apart.
So the illusions are a necessity, they're not a flaw.
The way humans are designed anyway.
And so ladies and gentlemen, I give you my theme.
Intelligence is an illusion.
Therefore, there can never be an intelligent machine.
And if we made one, we wouldn't know it was intelligent, or we'd kill it immediately.
All right, here's a little test I tried.
Now tell me why this doesn't work.
Presumably the way AI works is it knows the frequency of words, right?
So it looks at all the words that people have spoken and then it figures out the patterns and the most frequent things.
So when it forms a sentence, it's going to start the sentence and then finish it with what would be the most likely finish given the whole context of the situation, right?
That's what we were told.
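The word-frequency picture described here is, roughly, a Markov or bigram model. A minimal sketch of that idea, assuming a toy corpus and greedy "pick the most frequent next word" continuation (real large language models use neural networks over subword tokens, so this is only an illustration of the frequency intuition, not how they actually work):

```python
from collections import Counter, defaultdict

# Toy corpus; in a real system this would be an enormous text collection.
corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count, for each word, how often each following word appears.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def finish(prompt, max_words=5):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = following.get(words[-1])
        if not candidates:
            break  # never saw this word followed by anything
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(finish("the cat"))  # prints: the cat sat on the cat sat
```

Note that even this toy version completes a sentence only with whatever was most common in its training data, and quickly falls into loops; it has no notion of whether the continuation is true or interesting.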
Now if that's true, couldn't AI finish your sentences for you?
Have you tested it?
Start a sentence and see if it can finish it using what is the most likely finish for that sentence based on the larger context of your interaction.
No, it can't.
It can't complete them.
Well, it could complete a sentence the way a human could, if it were just a simple one.
But if you had anything interesting to say, it can't complete the sentence.
What's that tell you?
Well, first of all, you can't predict the future.
And that's what that would be.
But can you really pick up patterns if you can't complete my sentence?
Is it really a pattern recognition device?
Or have we been fooled and it never was a pattern recognition device?
It's just programmed.
I don't know.
There's something, I don't think I've formulated the question right yet, but does anybody feel what I feel?
That there must be some fraud involved with the description of how it becomes intelligent? Because if that were true, it could predict the future based on the current patterns of things.
Is that crazy?
So I believe its inability to predict the future proves it's not using pattern recognition.
I'm not sure I made sense.
It just feels like there's something there, like there's a logical disconnect.
But since intelligence is an illusion, what difference does it make?
All right, ladies and gentlemen, that completes my planned comments for today.
I'm just looking at some of your comments.
That blew your mind, a few of you, didn't it?
All right, let me ask.
Is anybody's mind blown?
Yes, absolutely.
Yes, yes.
Sure.
Well, that's why you come here.
And that's why Nassim Taleb was taken out of that super prompt and I replaced it with myself.
For this pithy and insightful analysis you're getting now.
Could Nassim do that?
I doubt it.
Yeah, he probably could, actually.
He's pretty smart.
I hate to say it, but he's pretty smart.
Alright, that's all for now.
I'm going to talk to you tomorrow, YouTube.
Thanks for joining.