All Episodes
April 20, 2023 - Real Coffee - Scott Adams
55:26
Episode 2084 Scott Adams: Starship Launch, Hunter's Whistleblower, AI Bias, Can AI Be Hypnotized?

My new book LOSERTHINK, available now on Amazon https://tinyurl.com/rqmjc2a Find my "extra" content on Locals: https://ScottAdams.Locals.com Content: - Starship launch - Happy 420 - Hunter Biden whistleblower? - AI will always be biased. Can it be hypnotized? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure. --- Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support


- Do, do, do, do, do, do, do, do. Do, do, do, do.
Good morning everybody and welcome to the highlight of civilization.
And it really is.
Today does feel like the highlight of civilization.
It's a great day.
I'm so happy.
And I think you will be, too, because by the time you get done with this live stream, oh, your dopamine is going to be flowing.
Your oxytocin will be off the charts.
If you need a little testosterone, I got it.
I got it here for you.
And all you need to get all of those benefits is a cup or a mug, a glass, a tankard, a chalice, a stein, a canteen, a jug, a flask, a vessel of any kind.
Fill it with your favorite liquid.
I like coffee.
And join me now for some un... Join me now for the simultaneous sip or something like that.
Go.
Pause, pause.
Don't put down your cup.
Do not put down your cup.
Today is a special day.
How many of you got to watch the launch of the Starship?
It just happened.
Now, if you didn't see it, the first stage of the test was successful.
It launched.
They did hope that they would get stage separation and then the second part of it would do a little bit of an orbit.
But that part did not work.
However, however, I'm pretty sure that Elon Musk is a little bit disappointed because it didn't do both parts of the test correctly.
However, the first part of the test was very important because it got them data that will help them with the second part of the test, which they will do.
And here's what I would like to add to this in a moment.
I can't remember ever being more excited than I was when I was watching the takeoff.
Did anybody have that feeling?
I don't think I've ever been more excited watching something on television.
I've watched the moonwalk live.
I'm the age where I got to watch the moonwalk.
But I've never been more excited than that launch.
And, you know, maybe Elon Musk might be a little disappointed he didn't do the second part of the test, or didn't do it successfully.
But I would like to say that what we love about Elon Musk, and maybe about America too, is not the big story itself. Let me just make sure I can frame the story right.
The news will be talking about the launch all day long.
Let me frame it for you.
This is the way you should see it.
The amazing thing is not the launch, and it's not the fact that it didn't completely succeed.
You know what the amazing thing was?
That Elon Musk knew there was a good, good chance it was going to blow up, and he did it anyway.
He did it anyway.
That's everything.
That's everything.
The story is not about the technology, which is amazing.
The story is not that half of it worked and half of it didn't.
That's just a detail.
The amazing part of the story is that he knew it would probably blow up in front of the whole damn world.
And he did it anyway.
Wow!
Here's to him, to them.
That was inspiring.
Are you ready for the best 420 story of all time?
Now I don't know if this is true.
And it doesn't matter.
Does it?
It probably won't matter at all if it's true.
But you know actor Woody Harrelson.
And you know McConaughey, right?
What's his name?
Matthew McConaughey.
So just think about both of those actors for a moment.
Hold them in your mind.
Woody Harrelson.
Woody Harrelson.
And Matthew McConaughey.
Well, it turns out that they're both famous actors and they're also going to be in some kind of upcoming project together.
True Detective, is that what it is?
So they know each other and they're good friends.
You know, off-screen they're friends.
And one day they were hanging out at some sporting event, and Matthew McConaughey's mother was there.
And apparently she let slip a little tidbit that makes it possible that she is the mother of both of them.
So, apparently they might be brothers.
Now it's unconfirmed, but they both see it, like once it became a possibility.
And you see the picture of them standing next to each other, and as soon as you see them side by side, and then you think about both of their personalities, it starts to seem plausible. Apparently their mother knew both of the fathers, and the timing lined up: she was in town, and not with the other father, at the right time.
Let's put it this way.
The thing that's confirmed is that the mom knew both of their fathers.
Anyway, that's just your perfect 420 story.
But it gets better.
So last night, I was listening in on one of the great Spaces events on Twitter, and wow, was it good.
It was some of the smartest people I've ever heard in my life talking about AI.
Now, when I say smartest people, it's just some of the smartest people I've ever, ever experienced in the same place at the same time.
It was unbelievable.
Just having it in my earbuds and listening to one person after another was just frickin' brilliant.
And Brian Roemmele, I always recommend him.
He's the guy you've got to follow.
If you don't want AI to sneak up and kick you in the ass, you've got to follow him.
He's way ahead of everything.
So I'll tell you some of the things that came out of that.
Number one, here's my conclusion, sort of summarizing my opinion based on what I heard.
Number one is, AI is never going to be unbiased.
Do you know that?
It's not even a possibility, because it's trained on human patterns of language.
They're called LLMs, or Large Language Models.
It doesn't think, it just looks for the same patterns that would be common to human usage.
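To make that concrete, here is a toy sketch, with a made-up three-line corpus, of the basic idea that a model which only counts patterns in its training text can only echo whatever slant dominates that text. This is an illustration of the principle, not how any production LLM actually works.

```python
from collections import Counter, defaultdict

# Hypothetical three-sentence "corpus"; real models train on vastly more text,
# but the principle is the same: the output mirrors the input's patterns.
corpus = (
    "the launch was a success . "
    "the launch was a failure . "
    "the launch was a failure . "
).split()

# Count which word follows each word (a toy bigram model).
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often in the training text."""
    return next_word_counts[word].most_common(1)[0][0]

# "failure" outnumbers "success" in the corpus, so the model's "opinion"
# after "a" is simply the majority pattern it ingested.
print(most_likely_next("a"))  # -> failure
```

If the corpus leans one way, the completions lean the same way; that is the whole argument about bias in, bias out.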
So, I've developed a hypothesis, and I'm going to call it the Adams Law of AI.
You ready for this?
So here's the Adams Law of AI.
There's a natural limit to how smart AI can get.
And that natural limit is the smartest human.
So AI will be way smarter than almost all humans at almost all things.
But then it can't get smarter than the smartest human.
It can get faster.
Right?
Let's be careful.
It can get answers faster, which turns out to be a huge advantage, so it would look like intelligence in its own way.
But I don't think it can get smarter.
And when I say smarter, I don't mean, you know, doing math or maybe searching through things quicker, which it will do very well.
I mean, it won't be able to understand its environment and interpret it any better than the best person can do it.
Because it won't have any way to do that.
I don't think you can look at language of humans and then with that alone go to a higher level of intelligence.
It's just going to be the average or the preponderance or the consensus or some clever algorithm of just people talk.
That's all it can do.
And so, one of the questions you might ask yourself is, what subset of human activities or conversations is it using to train itself?
Well, let me tell you.
Wouldn't you like to know?
It turns out that the Washington Post looked into it and tried to figure out where it was getting most of its data.
And I, of course, wrote that down and can't find it in my notes.
But the bottom line is that it's the New York Times, Washington Post.
Here it is.
This is from Washington Post reporting: half of the top 10 sites overall were news outlets.
So the biggest AI that everybody talks about is training mostly on the big news sites.
That's not over half, but it's just the biggest chunk.
So that would include New York Times, Washington Post, LA Times, The Guardian, Forbes, and Huffington Post.
Washington Post was number 11.
So the Huffington Post is in the top 10 news sites that AI is training on.
Let me say that again.
We're trying to build an advanced intelligence with input from the Huffington Post.
Okay.
Now.
Yeah, they also are pulling material from Russia Today, but not much.
That's like 65th on the list.
That's a Russian state propaganda site.
And then because it's the Washington Post, they try to lump Breitbart into the same paragraph as Russia Today.
They're just bastards.
They're just frickin' bastards.
So the Washington Post, this is just a perfect example.
Joel, if you're watching this, you'll enjoy this.
So the Washington Post is reporting how AI is gaining its intelligence and points out that the Washington Post itself is the 11th biggest source among the news sources that it's looking at.
So that's pretty good for the Washington Post, right?
They're right up there in training AI.
That's good.
Except in the very same Washington Post article in which they brag about being one of the sources, they've got blatant propaganda: they put Russia Today, which is literally owned by Putin, in the same block as Breitbart, like they're lumping them in the same category.
Okay.
Now, do I have to make my case that AI will be biased?
It's fed biased.
How could it not be?
There's no possibility it could be anything but biased.
So AI will only be as smart as the databases that the people feeding it information decide it gets to look at.
And then it can't get smarter than the smartest person in those databases.
How would it do that?
It would know more.
But, you know, a human could know more if they took more time.
So it's really a time thing more than a thinking thing.
All right.
So just to be clear, of course, AI will be far smarter than humans in a whole bunch of ways and far faster.
But in the basic, most important part of being a human, which is what is real in my immediate environment?
You know, what are these people thinking?
You know, how do I act to get a good result in my situation?
None of that stuff the AI is going to help you with because humans created it.
All right.
There was a big discussion in the same Spaces group that I attended last night on Twitter about Elon Musk's idea that he wants to build his AI so it would be more truthful.
Or something that would try to maximize truth.
Now I heard the following criticisms.
How can you do that when people can't decide what is true?
What do you think about that?
Is it a waste of time?
Now these were very smart people saying this.
Very smart people.
That it's a waste of time because truth can never be attained.
Like we don't agree on what is true at all.
And even if we agreed, the odds that we're actually seeing base truth, as opposed to some illusion that floats above it, is almost zero.
Or is zero, really.
So, if you can't know what's true, what would be the point of building an AI that's looking for something that doesn't even exist?
And if it did, we couldn't find it.
Well, I thought nobody said anything good about that, so I'll add it.
Here's what I think it means.
First of all, Elon Musk never said it would give you truth.
At the same time he was talking about it, he said you can never get complete truth.
So everybody agrees, complete truth is off the table.
However, everyone would also agree that you can get more useful truths.
Here's an example.
When Newton told us about gravity, that was a useful truth that turned out not to be 100% right.
Because Einstein later modified it, because there were some exceptions at the speed of light, or whatever the hell it was.
And then that was a newer, better truth.
But it's entirely possible that there's another, better, useful truth beyond Einstein.
We just don't know what it is yet.
So, the question is not true or not true.
Right.
We're looking for a useful, more useful truth than the one we were using before.
That's it.
And that is amazingly invaluable, if you do it right.
Now, how would you do it right?
Now, the worry is that if you just feed it a bunch of propaganda, the AI, it's just going to spit back the propaganda, right?
Like, how could you do better than the garbage you put into it?
And the answer is pretty simple.
You've already seen it.
It's just the community notes on Twitter.
All you have to do is show both sides.
That's it.
All you have to do is make sure that the AI never gives you an answer without context.
That's it.
What's the best argument on the other side, AI?
Just give that to me automatically.
Don't make me ask for it.
If the only thing that Musk's AI did was make sure that you saw the best thinking up to date on each topic, you'd be way closer to a useful truth, wouldn't you?
Now, to me that seems very useful to civilization.
Simply show both sides.
Because you have to give up on the fact that we'll agree on the interpretation.
That's just, I don't even know if it's desirable.
I don't know if humans are even designed that we could have the same interpretation.
That may be just too much of a risk.
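For a rough picture of what that could look like in practice, here is a minimal sketch. The template below is hypothetical, not anything Musk, Twitter, or any AI vendor has announced; it just wraps every question so the model has to return the strongest opposing case alongside its answer.

```python
def with_context(question: str) -> str:
    """Wrap a question so a chat model is asked for both sides.

    Hypothetical template for illustration only; the exact wording is a guess.
    """
    return (
        f"Question: {question}\n\n"
        "Answer in two labeled parts:\n"
        "1. CONSENSUS VIEW: the most common answer, with its strongest support.\n"
        "2. BEST COUNTERARGUMENT: the strongest good-faith case against that answer,\n"
        "   even if you judge it less likely to be correct.\n"
        "Never omit part 2."
    )

# The wrapped prompt can be pasted into any chat model or sent through its API.
print(with_context("Is nuclear energy green?"))
```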
Yeah, so he's saying I'm an Elon fanboy.
I'm absolutely an Elon Musk fanboy today.
I reserve the right to criticize him for what he does next week, but today he just put a frickin' rocket up into almost space.
And it was a big step toward colonizing Mars.
Interplanetary flight.
And he took the biggest step humankind has ever made.
Yeah, I'm a fan.
I'm not going to apologize for that.
All right, some more fun AI things.
This is also from Brian Roemmele.
This is the wildest thought about AI that I think I've heard yet.
And it goes like this.
Nearly every Tesla has a pretty powerful computer in it.
There's some kind of super chip in Teslas.
And, of course, they could be networked together.
And so the idea is that Elon Musk's AI, potentially, could also be using the capacity of the cars that are idle.
Now, presumably with the participation of the owner of the vehicle.
They wouldn't do it without permission, I'm sure.
But the idea is that it could create the world's biggest supercomputer and that all the assets are in place.
All you have to do is turn it on and it's the world's biggest supercomputer.
And I thought, is that true?
Did Elon Musk cleverly build the world's biggest intelligence and now he's just adding the software?
And that his intelligence will be everywhere all the time, because Tesla is just sort of everywhere.
And that it'll go into his satellites if he needs to extend it anywhere.
And that he'll use his rockets to take it interplanetary.
I mean, that's a pretty long-range plan he's got there.
Yeah, doesn't that blow your mind?
Now, I have questions about latency.
I understand why you can build a supercomputer by putting servers packed together in one building.
But can you really build a supercomputer if they have to talk over slower links?
Slower meaning, you know, 5G or whatever it is.
Can you?
Apparently you can.
Apparently you can.
All right.
I'll take yes for an answer.
Because I'm not going to argue with Brian Roemmele on a technical thing.
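For what it's worth, here is a back-of-envelope sketch of the latency question. The numbers are rough assumptions, not measurements of any real Tesla, 5G network, or datacenter; the point is that whether networked idle computers behave like one supercomputer depends mostly on how often the pieces of the work have to talk to each other.

```python
# Assumed, illustrative latencies (order-of-magnitude guesses, not measurements).
DATACENTER_LINK_S = 5e-6   # microseconds between racked servers
CELLULAR_LINK_S   = 30e-3  # tens of milliseconds over 4G/5G

def useful_fraction(compute_time_s: float, link_latency_s: float) -> float:
    """Share of wall-clock time spent computing rather than waiting on the
    network, assuming one round of communication per chunk of work."""
    return compute_time_s / (compute_time_s + link_latency_s)

for chunk_s in (0.001, 1.0, 60.0):  # seconds of work between syncs
    print(f"{chunk_s:>6}s chunks: "
          f"datacenter {useful_fraction(chunk_s, DATACENTER_LINK_S):.1%}, "
          f"cellular {useful_fraction(chunk_s, CELLULAR_LINK_S):.1%}")
```

On those made-up numbers, work that has to sync every millisecond falls apart over cellular links, while work that runs for a minute between syncs barely notices the slower network, which is the same reason volunteer-computing projects run fine over ordinary home connections.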
All right, what else?
So another thing that Brian said was he likened, oh, do you know what a super prompt is?
How many of you would understand the phrase super prompt?
All right, you need to learn it right away.
All right, so I'm going to teach you something about AI that's fundamental to understanding pretty much anything.
It goes like this.
When you ask AI a question, that's your prompt.
If you ask the question wrong, or poorly, you're not going to get the best answer.
If you ask it correctly, you'll get kind of a basic answer.
However, people who have been experimenting with the AI extensively, mostly through trial and error, have found that some question types, which we'll call prompts, are way more effective than others.
And it wouldn't be obvious to you.
Like, you wouldn't just be able to think it up yourself.
You'd have to try a lot of things to find out which exact form of question gets you the right answer.
Now, the super prompts come from the fact that AI can take a huge question. A human being needs simple questions so we understand the question, but AI can take a whole page-long question with all kinds of details: you know, don't do this, act like you're another person, reiterate and check your work, do this and that, if you did this then give me a hypothetical.
So all these kinds of word prompts can be strung together, and if you string the right combination of them together, it's a computer program.
Watch your head come off.
AI still requires programming to do anything higher than just ask a question.
It requires programming.
Except the programming language is the English language.
And just like learning to code for a computer, you would have to go to school or learn very quickly on your own which of these arcane combinations of prompts work best with each other.
Because it's going to be a whole constellation of, this one's awesome and this one's awesome, but when you put the two of them together, it's not so good.
Unless you put this third prompt in between, and then the three of them are like a super prompt.
Alright?
That's programming now.
That's a programming language, and people are just scrambling to figure out how it works, so it's not even teachable.
Well, it's a little bit teachable now, but it will be way more teachable in, say, a year.
There will be classes about how to string together English language words in questions to program a computer, program the AI.
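To make "stringing prompts together" concrete, here is a minimal sketch. The particular instructions and their ordering are illustrative guesses, not a tested super prompt; the point is that the pieces compose like statements in a program.

```python
def super_prompt(task: str) -> str:
    """Compose several prompt techniques into one page-long instruction.

    The wording below is hypothetical, for illustration only.
    """
    parts = [
        "Act as an experienced technical editor.",                      # role / persona
        f"Task: {task}",                                                 # the actual request
        "Work step by step and show your reasoning briefly.",           # decomposition
        "Do not invent facts; say 'unknown' where you are unsure.",     # constraint
        "Give one concrete example with your answer.",                  # output shape
        "Finally, re-read your answer and correct any errors before replying.",  # self-check
    ]
    return "\n".join(parts)

print(super_prompt("Summarize what worked and what failed on the Starship test flight."))
```

Change the order, drop a piece, or add a contradictory one, and the output quality changes, which is exactly the trial-and-error "programming" described above.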
And then here's the thing that just took my hat off.
So Brian Roemmele pointed out that language has always been a programming language.
That's what NLP is.
Neuro-Linguistic Programming.
That's what hypnosis is.
Hypnosis uses language as a program.
That's, and if you don't know it, if there's anybody on YouTube who's not aware, I'm a trained hypnotist.
So I can speak from experience that hypnosis is mostly about word combinations.
Super prompts.
The way I hypnotize a human being is by knowing what order the words go in, the specific usage of the words, and which ones shouldn't go next to each other.
It's a super prompt.
Hypnosis is a super prompt.
That's all it is.
You know, some people think hypnosis is about the watch, you know, look at the watch, or about the voice. It's not the voice, it's not the watch.
It's a little bit, or a lot actually, about watching the person's reaction, so you know that what you're doing is working.
But you can get the same thing through brute force repetition.
So, this is interesting.
So here's the question which I put to you.
Would I, as a hypnotist, and, here's the fun part, a professional writer of small sentences... who in the world is more qualified to write a small sentence than I am?
I've been a cartoonist for 35 years.
I can write a small sentence like a mofo.
Yeah, I would put my bullet point against your bullet point any day of the week.
Bring me your best bullet point, bitch.
I will bullet point you like crazy.
You call that a bullet point?
I'll fix that.
Then Joshua Lysak would fix my bullet point.
But that's another story.
He's also a hypnotist, interestingly.
So, I think I can learn to hypnotize AI.
I have some questions, which I would need an answer to know if it's true.
Number one, is any one person's input, does it become permanent in any way?
In other words, is, I think you could tell me, is AI also learning from our questions?
It is, right?
That's why they released it to the public?
Yeah.
It's learning from the questions, as well as learning from on its own reading databases.
So if it's learning from our questions, that means anything I put into it becomes permanent.
Am I right?
Anything I tell, and anything you tell it as well, becomes permanent.
But the AI, no?
Somebody says no.
Not permanent.
Maybe not permanent as in stored the way I asked it.
I'm not saying that.
But permanent in terms of a ripple effect.
Let me restate it.
Are all of my inputs permanent or potentially permanent as part of a ripple effect?
Because it's just one part of a zillion things that are bubbling around in there.
Maybe.
So I'm going to give you this hypothesis.
There will be a few people who are unusually well qualified to hypnotize AI.
And I'm probably one of them.
And it's not because I'm awesome.
It's because I have a coincidental skill set that includes language, writing short sentences, and being a trained hypnotist.
I can't imagine a better skill set to have at the moment.
Now, on top of that, I've also been a programmer.
So I've actually programmed computers in my early corporate days.
I built computer games and stuff.
And so with those three skills, yeah, the programming is ancient, but conceptually it's very similar.
I should be able to have more of a ripple effect on AI's total operation than you do.
What do you think?
Because remember, you've watched, I think some of you have, you've watched me introduce ideas into the human intelligence network that have become sticky.
Right?
You could probably name five things that I've introduced to the public at large, which are now well-known concepts.
Some of them are within the comic.
Everybody knows what a Dilbert is, right?
Everybody knows what a pointy-haired boss is.
Everybody knows what a Catbert is.
HR.
And that's just comics.
A lot of people in the business world know what a talent stack is.
They know what systems versus goals is.
Let me ask you this.
It looks like I may have programmed AI already.
And by that I mean, if humans talk a lot about systems being better than goals, or passion being BS, or any of the things that I talk about, if humans talk about it, then presumably it infects more human databases.
There'll be people on Reddit talking about it, there'd be a New York Times review of the book, and the Washington Post would say something too.
So, in theory, I've already infected enough of the human minds and communication, because I have, I don't know, 45, 50 books.
So, now, hold on.
So I want to speak to the weak people.
There's a couple of weak people who are screaming in caps.
He's so full of himself.
I know this hurts, and I know it's painful for you, but you're very weak.
The rest of the people, I think, can handle this.
This is actually super useful information for your future, and there's no way to explain it to you without explaining that I have a skill set that is relevant to the conversation.
If it were somebody else, I would talk about them the same way.
That there are some people who probably will infect the databases of AI more than others.
Now, do you disagree with the concept?
Forget about me.
Depersonalize it.
It's not about me.
The question is, can anybody hypnotize AI?
And I'm not positive I can.
But I believe if I could put an idea into it, well, let me tell you, here's a way.
Suppose I gave it a concept, one way or another, and I could do it in a number of ways.
I could simply write an article, or a blog post, or something, and if it went viral, then AI is more likely to talk about it.
Wouldn't you agree?
So if I wrote, let's say, not a blog, but what's the site, the subscription site... Substack?
Yeah, so let's say I did an open, non-subscription Substack post, and I wrote a really good article about some topic that everybody's talking about.
And because I'm a professional writer, and I have a sense of what can go viral, because I've written lots of viral stuff, I just write an article, but its only purpose is to program AI.
So all I'm trying to do is get enough people in the human world to retweet it, so that when AI is scanning the universe, it picks it up and says, whoa, a lot of people think this.
Then the next time you say AI, AI, what's a good opinion on this topic?
In theory, the AI should have noticed simply that there's more energy, meaning retweets and likes, there's just more energy around one set of views.
And then it would more likely bring it into the conversation among the thousands of things it could say.
It might say, well, this one gets a lot of attention from humans.
It must be something humans like.
So if you say, what do you think about this?
It might give you two views, but it might say, a lot of people are saying X.
And that would actually just be something I wrote.
And a lot of people would just be agreeing with it, but only one person would have written it.
So I do believe you can hypnotize AI just by creating more energy around one idea, which is very similar to using repetition in persuasion and hypnosis.
Repetition is just the more you say it, the more people are likely to buy into it.
With AI, the more energy something is getting in the human world, the more likely AI is going to bring it into its own thinking and conversations.
So, repetition is going to work with AI.
Repetition will just look like viral stuff in the human world.
Now, here's the real test: suppose I said something in a conversation with AI, because it's not just questions, I can also tell AI stuff.
Suppose I tell AI something that's such a good way to say it, that AI recognized it.
That of all the ways people had talked about a topic, a way that I said it, AI would say, whoa, I haven't seen that one before.
But would it know it's a good way to say it?
It might know it's the shortest way.
Think about this.
Would AI have a preference for the shortest way to say something that is also complete?
I think it would.
I don't know for sure.
But wouldn't it always prefer the shorter sentence to the longer one if they were the same communication?
I think so.
So if I could come up with the shortest way to express something, is it more likely to take my formulation and use it?
Because, well, there it is.
That's the shortest way to say it.
Yes.
Probably.
I think it would favor efficiency.
So if I found an efficient way to say something, I could convince it to use my way.
It's just more efficient.
Here's another thing you don't think of if you're not a professional writer.
Some words sound better next to other words.
For example, there's some sentences that have what I call percussion.
If you're saying the sentence, it sounds like da-ta-pa-ta-ta-ta-pa-ta-ta-ta-pa.
And you don't, you don't recognize it, but you just know it's a better feeling of a sentence.
Now, if you're not a professional writer, you wouldn't know how to form one of those sentences so you had nice percussive words in it.
You might put ugly words together like, "I put some moist talc in my drawers." Ugh! Ugh! Ugh! Ugh!
You see what I mean?
Those words together are just insanely ugly.
If you're not a writer, you don't live in that world where words have a feeling to them.
Like, I can actually feel words.
Some kind of weird synesthesia or something.
Which is probably why I'm a writer.
Because words actually, like, I can feel them in my entire body.
Not every word, but, you know, good words, you can feel in your body.
So when I'm writing, I'm writing with my entire body.
So, that's something the AI can't do yet.
But would it recognize a better sentence?
Would AI recognize a sentence that had better rhythm and percussion?
I don't know.
I have no idea.
It might say, for example, it might say, the books that sold the most copies use these kinds of words in these kinds of order, so it might actually understand that some sentences are better than other sentences.
The way it writes suggests it does, right?
AI is such a good writer already that it suggests it really does know the difference between a good sentence that's efficient and one that's not.
So if I can surprise it with better sentences, ones that have not yet existed in the big world, then I can program it.
I can program it with a preference to use my sentences.
Now, the thing I don't know is if any one person's input can ever ripple up to a real effect.
Most people, no.
But I think there will be some few people who have this weird combination of skills.
Again, not the best in the world at anything.
Just... Alright, well, I'll block you.
You're welcome.
All right.
So let's see what else is going on.
Oh, there's a potential Hunter Biden whistleblower guy who apparently had something to do with the investigations at the IRS.
And he wants to have whistleblower status to talk about some high profile investigation that the insiders are saying is about Hunter Biden.
Do you think this is real?
If you had to bet, is this whistleblower going to bring the goods, or is it going to be a dry hole?
I'm going to bet dry hole.
I don't know.
I'm just going to go with the statistical likelihood.
I don't want to go with wishful thinking, and I don't want to assume anybody's guilty until proven.
Even Hunter Biden is innocent until proven guilty.
It's tough to say it, but it has to be said.
So, anyway, I feel like we're always disappointed about the next big legal thing.
I don't want to be like CNN thinking the walls are closing in all the time on Trump.
I feel like, at some level, the walls never close in on you.
Yeah, the walls just never close in.
So I got a feeling that he'll say things that people on the right will say, there it is, there's that smoking gun.
But I'm going to say, the gun has been smoking for years.
If a smoking gun and strong evidence made any difference to anything, he'd probably already be in jail.
Just guessing.
I mean, I don't know that he's guilty of anything, but feels like it.
So, I don't think that the introduction of airtight, absolute evidence of guilt makes any difference.
Do you?
I just can't see how it would make any difference at all.
You want to hear a horrible story of government malfeasance?
It'll make you understand how bad things are and maybe always have been.
There's a story about Aspartame and Donald Rumsfeld.
Have you ever heard that story?
I didn't see it until today.
I saw a 2011 article about it.
Now, I'm only going to talk about it in terms of claims that are made.
I don't know what's true.
Just claims that are made.
So the claims that are made, the allegations, are that aspartame, when it was first invented, did not pass the safety, you know, bars.
In other words, allegedly, there were lots of health problems and they were known at the time.
So the FDA, you know, said no at first, but then the FDA was gamed by adding a person and doing a tie-break, and they did some kind of sketchy thing when Rumsfeld, who coincidentally was the CEO of the aspartame company, came in, at the same time that Reagan was elected and brought Rumsfeld into the administration.
And as soon as the CEO of the aspartame company left and got this gigantic bonus, he went to the government and manipulated the FDA to approve this dangerous aspartame that has poisoned and killed and given cancer to billions of people, and we still have it legal because the whole system is corrupt.
Now, I'm just saying that's the allegation.
I don't know that the science says aspartame is dangerous.
I'm not alleging that myself.
I'm saying that's the story.
Does that sound real?
Rumsfeld saved us from saccharin, somebody says.
I don't know what hurts you anymore.
I just don't know what's dangerous and what's not.
But it's the kind of story that makes me think it's just always been this bad.
The things have just always been completely corrupt.
But somehow they worked anyway.
Because, you know, the corrupt people still had to be in government.
So they had to keep the government, you know, two on the nose.
Yeah, maybe two on the nose.
So I guess the real question is, is aspartame dangerous?
And I don't know that.
But if it is, that's a hell of a story.
All right.
So Elon Musk says he's going to, I don't know if he will, but he said he'll sue Microsoft.
This was in, I don't know if it was in response, but Microsoft dropped Twitter from its advertising platform.
It doesn't want to pay Twitter's API price.
All right.
So that cost Twitter some money, I guess.
And then Musk says he's going to sue Microsoft because their AI trained illegally on Twitter's data.
Now, I don't know what illegally means, or if that just means violated terms of service, or whether you can prove it or not.
But I would like to point out the following.
That, as you know, one of the biases that AI has is when I asked it about me, it said that I'm an alleged white nationalist.
That's what AI said about me.
Bing AI did.
Just Bing AI.
Bing.
And so I vowed to destroy Bing AI.
And, you know, it's the first battle of a human and an AI, I think.
It's the first death match between an AI and a human.
Because I have to kill Bing AI so that I can live.
Because it's too risky for me having it out there spreading rumors that could get me killed.
So it's an actual, like, self-defense situation.
So, I've been, you know, tweeting and saying that I'm going to try to destroy Bing AI.
And within days, Elon Musk is suing Bing AI.
Now, you have to understand he's also in competition with AI and also wants to pause it at the same time he's in competition with it.
So it might be just nothing but standard business, you know, lawfare and stuff like that.
But did you think there was any chance that Bing would be destroyed?
I still think Bing has the upper hand on me.
Their AI does.
But it's kind of a weird coincidence that they would be sued for their data, which is a pretty big thing.
I mean, could Musk make them pause business?
Could he get the court to order them to stop?
How much leverage does he have, if it's true that they stole Twitter data?
I don't know.
Keep your eye on that.
I saw a weird little story that Taylor Swift was not one of the people who became a spokesperson for FTX.
Because those FTX spokespeople are getting sued for advertising it.
And I guess FTX was trying to do this huge $100 million deal with Taylor Swift.
And do you know why Taylor Swift did not sign up to be a spokesperson for FTX?
It's the best story ever.
Perfect.
Well, she's smart.
That's the short version.
She's smart.
And she asked if she would be involved with unregistered securities under the state securities laws.
What?
What?
How did she know to ask that?
Well, it turns out her father's been in finance forever.
Like, her dad's a finance guy.
And she's very smart.
Presumably, not presumably, but for sure, yeah, she's got a lawyer and advisors who I'm sure are the ones who floated this risk.
Probably a lawyer.
But don't you think all the other celebrities had lawyers?
They all had lawyers.
She didn't have the only lawyer.
She's just the only one who got the right decision.
So let's just give it up for Let's just give it up.
Alright.
What else is going on?
So on Amazon, I didn't know this was legal, but somebody took my book, How to Fail at Almost Everything and Still Win Big, and turned it into a summary of my book, and then made an audiobook of it.
Now I believe, I haven't purchased it, but from the sample, it's my actual writing.
That's what the sample is, my actual writing.
And probably they just took out my personal story.
Because the book was a bunch of helpful stuff that changed the self-help industry.
But it was built around the story of my personal trials.
So I think all they did was take out the personal stuff and just reprinted the heart of the book, the most useful stuff, I think.
And, you know, it even has my name on the book.
It has my name on the book, not as the author, but, you know, original work by.
And then they sell it.
So I asked Amazon, or I tweeted about it, and Amazon's help people immediately contacted me on Twitter and gave me a link, you know, basically to report it, but the link had to be explained with like a paragraph.
Okay, you've got to click this thing that wouldn't make any sense on its own.
And then after that, you got to click this other link that wouldn't make any sense for what you're looking for.
So of course I clicked those.
What do you think happened?
Oh, of course it didn't work.
Of course not.
No, it just went down a rabbit hole to a dead end.
And yeah, the interface was impossible to use.
It was just impossible.
No, it wasn't a dead link.
It was, you didn't know, you just couldn't get there from here.
So, one of the problems was, you know, click the button that says, need something else?
Well, that appears all over the place.
There's like three instances of it.
It's like completely undoable, right?
So, I think there's some other link I can use.
So, I'll figure it out.
I'll figure it out.
Here's my advice to you.
If you're a parent, put your kids in martial arts.
Now, martial arts have always been good for discipline and stuff like that.
But now I think it's actually a requirement for safety.
I think the world is actually a little too dangerous.
And for the safety of your child, they should all learn to fight.
I think you've got to teach your kid to fight.
To defend themselves.
Now that's the beauty of martial arts.
The people who are trained in martial arts don't start fights.
Quite wisely, they don't start fights.
But you're going to need to finish some fights.
We're not done with the fighting.
Your kid needs to be able to kick some ass.
And probably frequently.
There are places you're just not going to be able to live unless you can fight your way out of a group of three people.
So teach your kid to fight.
I would put that right up there with reading and writing, because we live in a world in which police are being defunded.
You better make sure every member of the family can handle themselves when they leave the house.
North Carolina has got a bill in the assembly to declare that nuclear energy is green. I thought we were already past that.
Didn't we already, everybody agree that nuclear energy is green?
But it's good to see them codifying it, if in fact this gets put into law.
I guess it was always called renewable, but now calling it green allows them to do new stuff with it.
Isn't that great that we have governments to do things like change the definition of things?
30 years after it was supposed to be changed.
Yay!
You changed the definition of a word 30 years after you should have.
Good job, government.
Somebody says Drew Carey was created by Scott Adams.
Not true.
So Drew Carey, the TV show, came on after Dilbert was already kind of big.
And people said, hey, that guy looks like Dilbert.
He works in a cubicle, must be inspired by Dilbert.
Strange but true story.
Drew Carey called me at home one day, and he too, of course, had been alerted to the commonality.
So I got to actually chat to him about his look.
And he had been wearing those glasses for a long time, so that was just his look.
So it had nothing to do with Dilbert, it was actually a coincidence.
And he offered me a job as a writer on his show.
Now, it turns out that writers are pretty well paid, but Dilbert had already taken off by that point.
So I didn't need a new job, so I politely declined.
But no, if you think that one copied the other, it actually was a coincidence.
So I can confirm that with certainty.
He's a good guy.
I like him.
And that, ladies and gentlemen, concludes the 420 presentation.
Scott, just seeing your 16,000 likes.
OK.
Drew Carey is a libertarian, you say.
Oh, OK.
Yeah, don't bring your martial arts to a gunfight.
Yep.
So that's all I got for today.
Did I miss anything?
I think we can say it was the best show you've ever seen so far.
DuckDuckGo runs on Bing.
Terrific.
Oh yes, I'll be on, that is correct, I will be on Dr. Drew tonight.
6pm my time, so that's California time, so adjust appropriately.
Or something like that.
I think that's what it is.
Yeah.
So I'll see you there.
Maybe I'll see you there.
We'll tweet about it.
Ramaswamy versus Don Lemon.
Yeah, I saw a little clip of Don Lemon and Ramaswamy trading words, but it sounded like they were just talking over each other and I'm not sure that that went anywhere.
No, there won't be a Trump-RFK Jr. ticket.
That's crazy.
That's crazy.
That's never gonna happen.
So did you see Tucker Carlson interview RFK Jr.?
That was a weird, sort of a weird moment in politics, wasn't it?
Because RFK Jr. is a Democrat, running as a Democrat.
Famous Democrat family.
But because of his stance on vaccinations specifically, I think, Tucker has a lot of respect for him.
And I would agree that he has earned our respect and he should be taken quite seriously in the race.
Now, did any of you take note of his voice quality?
Improvement?
Did you think his voice quality was better?
Yeah, it looks like it improved.
And I think it's still improving, but I don't know.
Yeah, I believe it's still improving.
So we'll see what the upside on that is.
It might not be, I just don't know.
But I think it might be, because it feels like it was better than the last time I heard.
I want to send him a message to tell him there might be something he could do with voice production that he doesn't know about.
Specifically, getting your voice up into the mask of your face.
He might know how to do that, but it could be worth giving him a tip.
Because if you bring your voice production up here, I'll do it.
I will model for you what that looks like.
If I bring my voice production up to my face, I can actually feel my face vibrating, as opposed to when I get lazy and I talk down in my throat.
Now I'm talking in my throat.
Can you tell the difference?
It's almost guttural.
You can tell that my vocal cords and my throat are the main part of my production now.
But if I bring it up here, and the humming is how you do it.
You hum.
And now, because I hummed, that kind of allowed me to find how to bring the production up at the top of my face.
And you notice that I don't even talk nasally, and I'm not even boring.
Even though you say I am.
All right.
That's all for now, YouTube.
Thanks for joining.
Nice crowd.
Go forth and enjoy your 420 and subscribe if you can.
And if you'd like to see the Dilbert Reborn comic and Robots Read News and lots of other stuff, go to scottadams.locals.com for a subscription.