All Episodes
April 14, 2023 - Real Coffee - Scott Adams
01:23:05
Episode 2078 Scott Adams: Biden Bucket List, AI Laws Coming, Bud Light Prediction, Classified Docs

My new book LOSERTHINK, available now on Amazon https://tinyurl.com/rqmjc2a Find my "extra" content on Locals: https://ScottAdams.Locals.com Content: Biden Bucket List tour AI laws coming Bud Light predictions Classified documents leak Justice Thomas troubles Lots more fun ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If you would like to enjoy this same content plus bonus content from Scott Adams, including micro-lessons on lots of useful topics to build your talent stack, please see scottadams.locals.com for full access to that secret treasure. --- Support this podcast: https://podcasters.spotify.com/pod/show/scott-adams00/support


Good morning, everybody, and welcome to the Highlight of Civilization.
My goodness, what a day you're going to have today.
If the rest of your day is as good as the way it's starting right now, it's going to be great.
And if you'd like to take your appreciation of the day up to a new level that has never been seen before, all you need is a cup or a mug or a glass, a tankard, chalice, or stein, a canteen, jug, or flask, a vessel of any kind.
Fill it with your favorite liquid.
I like coffee.
And join me now for the unparalleled pleasure of the dopamine hit of the day, the thing that makes everything better.
And now it's time for the Wizard of Oz.
Ah.
I don't think I'll take that joke too much further.
I think we've done enough with that one, haven't we?
Yeah, that's a simultaneous sip and it's better than anything that's ever happened to you.
Well, I would like to add this to your list of evidence that we live in a simulation.
What are the odds that, do you all know Tom Fitton?
You know who he is, right?
Tom Fitton.
And whenever he's on TV, he's insanely ripped.
He just has super big muscles.
And he's always wearing a shirt that's made for people his size, but he kind of stretches it out of shape because he's too muscular.
And I always thought, what are the odds that Tom Fitton would have shirts that are barely fitting?
They're barely fitting.
I mean, he tries to wear them.
But he's got too much muscles.
They're barely fitting.
Anyway, I don't know why I wanted to start with that.
That's sort of a warm-up, a little appetizer for the news that's coming.
It's going to get a little more challenging after this.
Well, here's some good news for the world.
My book, God's Debris, which, if you're not familiar with it, I'm going to blow your mind here.
Watch this.
In the comments, how many of you have read the book, God's Debris, more than once?
More than once.
Just watch the comments.
On Locals, it's a whole bunch of people.
But there should be a bunch over here.
Now, I'm always uncomfortable marketing, because marketing involves, you know, hyperbole and exaggeration and, you know, I don't like to get too far from what's literally true.
But I will tell you this is literally true.
More people have told me that God's Debris is the best book they've ever read, of all books, in the world, than anything I've ever heard.
In fact, I don't think I've ever heard anybody else call any other book the best book they've ever read.
It's just sort of a thing people don't say.
But over on Locals, a lot of people have read it, and I can see in the comments that they're agreeing.
So, I put it up for free on the Locals site.
So if you're a Locals subscriber, God's Debris is now in text form for you for free, for the cost of being a member.
So if you were thinking of either reading the book or trying out the Locals subscription site, you can get a free book now.
So you can try it for a month.
The cost of a month is about the cost of a book.
And I'm also going to add the sequel, The Religion War, in the coming weeks.
And I'll also add both of them as audiobooks.
So that will all be available just within Locals.
Now, do you understand how big a deal this is?
Not for me specifically.
But do you understand that Locals allows me as an author to completely bypass publishing?
Completely bypass it.
And then with the help of AI, I'll create an audiobook using a digital voice with no time in the studio.
No time in the studio.
I'll just put the text in.
I have to break it into chapters, I guess, because it won't handle all of it at once.
It turns out I got cancelled from worldwide publishing at exactly the time that publishing itself was cancelled.
Do you know how big of a coincidence that is?
I mean, this has got to be why we live in this simulation.
After 100,000 years of human history, the very exact time that I get cancelled from all publishing forums is exactly the same time That I don't need any publishing.
What are the odds of that?
I mean, really?
Is that a coincidence?
That's just too weird.
Maybe I caused it.
I don't know.
By my own actions.
Well, here's good news for banks.
JP Morgan Chase just had a blowout first quarter.
Record profits.
Just incredible profits.
Now, there's probably something temporary about that, but do you remember my economic advice for where to put your money?
Don't put your money in the 19th largest bank.
The number one bank in the United States is doing great.
Now, even Jamie Dimon says, you know, the storm clouds are not completely gone.
There's still risk for the banking industry.
But it would be crazy to have your money anywhere but top three.
Top three is where you want to be.
Because if JPMorgan Chase ever goes out of business, the country goes out of business.
Do you get that?
There's no scenario in which JPMorgan can fail and the rest of the country goes on fine.
It's just too big.
There's no way that JPMorgan can fail without the country being failing at the same time.
I don't give financial advice, so don't take my financial advice, but it's just a statement of fact that if you are betting on the whole country, there are some entities that are so associated with the whole country that it's almost like the same thing.
Alright, so that's good news.
Isn't it weird that this week we saw inflation go down, or at least the future inflation starting to trend down, and the big banking disaster that we feared for a little while appears to be gone.
Am I right?
The banking risk appears to have passed.
Now I would like to remind you that I told you the banking problem would pass.
I'm just going to add that to my record.
Right?
Did I not?
Did I not tell you that the banking problem, we'd get past it?
Yeah, I'm sure I did.
So, when you're looking at who is making predictions that work out, add that one to my yes column.
I was never too worried about that becoming, you know, the big thing.
You could argue it was an easy prediction because it was binary.
It was either yes or no.
And the smarter bet was that we'd figure it out.
All right.
Biden's still in Ireland, right?
Joe Biden.
And we'll talk about his embarrassing little encounter with a child asking him a question.
But how are we ignoring the following observation?
You know this trip is a bucket list trip, right?
The trip to Ireland has nothing to do with the United States.
And it's not exactly just a vacation.
It's a bucket list.
He's literally planning to be dead.
And I'm not joking about that.
Biden going to Ireland, is Biden planning for his own death?
That's why you go.
To see, you know, see your family that you haven't seen and connect with your roots.
It's a bucket list thing.
The President of the United States is not planning for the success of the United States.
He's planning his own funeral.
And we're okay with that.
There's something that my mother used to say that is probably the most useful understanding of psychology.
If you could only understand one thing about human psychology, it's this.
If you do something long enough, people get used to it.
Because people get used to anything.
No matter how bad it is, they just get used to it.
It's sort of how we're designed.
And somehow, we got this president, Biden, who looked like he had maybe some cognitive hiccups.
And we were like, all right, well, as long as he's got good advisors, and there is a system to replace him with a vice president, and we'll all be watching, we were kind of like, all right, we'll try this, I guess.
And then he just kept declining.
And we're watching him now in Ireland.
He couldn't even answer a question about the steps for success.
When a child asked him, what are the steps for success?
Nobody's pointed this out yet, but Joe Biden has never been successful where steps were involved.
Boom!
First one.
First one, nobody beat me to that.
I waited a full day.
I waited one full day and nobody beat me to that.
Yeah, Joe Biden is not good on steps.
All right, all right.
That's professional grade humor there.
Amateurs, you can't do that.
No, you need years of experience to pull together that kind of a joke.
Joe Biden, he's not good with steps.
Alright, enough of patting myself on the back.
I can do that later, after the live stream.
Actually, I didn't think of that joke until just the moment I said it.
So, I am actually congratulating myself because I was thinking, oh, that was pretty good spontaneously.
So, enough about me.
So, as the story goes, Hunter was there and Hunter was helping his father, and he says, no, the question is, you know, what do you need for success?
He just rewords it differently.
And then Biden answered.
His first answer, to the original question, what are the steps to success, was this.
Biden said, make sure everybody doesn't get COVID.
Which, first of all, was last year's problem.
It's not today's problem.
And secondly, it wasn't really responsive to the question.
So then he takes a second cut at it after Hunter clarifies the question for him.
And he says that the secret to success is when you disagree with somebody, don't make it personal basically.
And I thought to myself, what is he doing to Ireland?
Is he giving Ireland bad advice just to keep them away from being a superpower someday or something?
If you had to pick one thing, is that the one thing for success?
I feel like I would have listed several things before that.
Stay out of jail.
Stay out of jail.
Marry well.
Make a good mate.
Build a talent stack.
Have systems, not goals.
I feel like The President of the United States doesn't understand anything about how success works, except in his limited political vein there.
So he's on his, I'll call this the Dirt Nap Tour.
The Dirt Nap is my father's clever name for death, when you take the dirt nap.
So Biden's on his Dirt Nap Bucket List Tour.
Going great.
All right, because you don't have time to do this, and I've decided to wade into some of the AI apps to see what's what, here are some updates.
Number one, you may have noticed that I've been using art from Midjourney to advertise the live streams.
And so there are really interesting comic-like versions of me that are better than a picture of me.
So I went back to use it a second time.
This morning to make some more pictures I can use next time.
Couldn't figure out how to use it.
Could not figure out how to use it.
And yesterday when I tried to use it, because the interface depends on the Discord site, you basically text to it like you were texting a person.
That's their interface.
And you have to know which of the categories to choose so you can text to it, because if you don't do that right, it won't work.
And then you text it into the ether, and then you sit there and wait.
And you don't know how long you'll wait, while you see other people's requests go by, satisfied.
And then I say to myself, is there any way to know that it did my request and then stored it somewhere?
Or do I literally have to sit in front of it without knowing how long I'll sit there until I see mine go by?
Because that's the way I did it the first time.
So today I tried to figure out, did it store it somewhere?
And I just don't know.
And I couldn't figure out where.
And then I thought, well, maybe I'll make another one.
And I couldn't figure out how.
Because apparently now there's a special little dash-dash V5 or something you put after it.
But also there are a whole bunch of other dash-dash codes you can put after it to make it do different things or better things.
Let me tell you, Midjourney is unusable.
It's basically unusable for an average person.
That's AI.
So, so far AI is a complete oversell.
Everything that you do with AI will make you work harder, not less.
But, very much like technology, when technology and computers were first coming, the computers didn't make you work less.
You work harder.
You could just do more different things.
So AI is basically like a personal computer.
It's going to make you work harder.
You will have to do a lot of work to make it do anything.
I'll give you another example of that.
And the interface will be terrible and you might not be able to use it twice.
So literally, I don't know, am I going to have to go to Google and have to research how to use the interface of something I've already used once?
Just think about that.
I've already successfully used it once, and just in the time it took me a few days to get back to it, it's unusable.
Because I don't know what those codes are, and I forgot where I put them.
And I'd have to Google the interface and find out where do they hide things.
I'd have to debug it.
It's just like hacking.
Every program I use now, It's like I have to hack into it, and I have to research it, how to use the interface.
Like, there's nothing you can just turn on and use anymore.
Those days seem to be over.
All right.
Then also, I told you I was going to use Synthesia.io to create an avatar of myself to do audiobooks of my existing books.
Now, here's what I learned.
The entire reason I wanted to use AI to do the audiobook is to keep me from spending days in the studio recording it myself.
So the entire objective is for me not to spend days in the studio.
That's it.
That's all I'm trying to do.
Because I realize that the AI voice might not be as interesting as mine, as the author.
It's the only thing I wanted.
So I started doing this with Synthesia.io, and the first thing it tells me is, you know, to get the lighting right, so we can get your avatar photographed correctly, you'll probably want a studio.
Yeah.
In order to build the AI that looks like me, I would have to spend days on my own.
Days.
I'd probably have to buy new lights, you know, so I have enough.
I'd probably have to paint a wall, so I'd have a white background behind me.
So I'd have to have the wall lit, and I don't have any blank walls.
How many of you have a blank wall somewhere?
That you could stand in front of and it would all be blank behind you, like white.
Do you?
Some of you do.
I don't have any blank walls.
There's always a window or some damn thing on the wall.
So I would have to paint my house, change a wall.
I'd have to get a haircut.
I'd have to shave.
I'd have to make sure that I wasn't high, because I didn't want to look high forever on my avatar.
But that was more of a struggle for me, schedule-wise, than you'd think.
Then I have to build a whole environment, like a laboratory.
Then I would have to light it, and it would probably take me days to get it lit.
Then I'd have to work a deal with Synthesia.io, because you have to make a special deal.
It's not just click the app.
Then when I'm done, and I've recorded it just the way they want, I will send it to them.
And it might work.
It might.
But there's no guarantee.
In other words, when I say it might work, you'll definitely get back your avatar.
But it might not be what you wanted.
And you're done.
And you're done.
That's it.
So the process is not easy.
And I'm starting to develop a theory that AI will make you work harder.
Same as computers did.
Computers did not reduce your time in the office, they increased it.
Because you took it home with you, and you took it on the weekend with you.
AI is going to make people work harder.
Unfortunately.
Because it's made by people.
Initially.
There will be, certainly there will be things you do faster, but it won't make you work less.
Just like computers.
That's where I'm going with this.
You'll be able to do things you couldn't do, but not easier.
All right.
Let me talk about human intelligence and then I'm going to dovetail back into AI so you can predict where it's all going.
Here's a real thing that happened.
On January 1st, Washington Post is reporting this.
There was a new law that went into effect in America that if you had a food processing factory and one of your machines handled sesame, it had to be thoroughly cleaned before you used the machine for anything else.
And the reason is that sesame is one of those things that a lot of people are allergic to.
I didn't know that, did you?
I didn't know sesame was something people were allergic to.
So what do you think happened?
What do you think happened when there were big potential legal problems if you didn't clean your machine right, and it was a factory where they would definitely be making more than one thing on that machine?
Do you think that somebody said they'd stop making sesame products?
No.
Nope.
It's much, much worse than that.
Here's what a number of the food people did.
They added sesame to everything they make.
Because if everything you make has sesame in it, you don't need to clean the machine between uses.
Just let that sink in.
That's what the real free market did.
Because it was legal.
It was completely legal.
There's nothing wrong with having sesame in your product.
But if you don't have it in all of your products that are on that machine, then you have to do something that your lawyer probably says you can't even do.
Which is clean it sufficiently that somebody won't later sue you.
If I owned the food company, that's what I would have done.
Because I would say, I'm not going to take a legal risk.
I'll just add sesame to everything.
Now, they don't have to add much, so you can't even taste it.
Thanks, Shane.
I appreciate that.
So, get two machines?
Well, getting two machines is more expensive.
Adding a little trace of sesame?
Immediate solution.
Alright, so, keep that in mind as a limit of human intelligence.
There were probably a lot of smart people involved in making that law, and they didn't see that coming.
So maybe they could have been a little smarter about that.
So that's human intelligence.
Just put a little pin in that.
Think about that a little bit later.
Speaking of intelligence, Chuck Schumer says Congress is going to start working on AI regulations and AI laws.
And what's your first impression of that?
What do you think of Congress making laws to suppress AI or to limit what it can and cannot do?
It's great and terrible in equal measures, isn't it?
It's terrible because it's the government putting its foot on the free market and you know that's not going to be good.
But it also might be the only thing that keeps us alive.
I don't know how you can tell the difference at this point.
It might be the only thing that protects us is some laws.
Maybe.
But it might be also destroying the entire industry.
We're in really dicey times now.
I mean, normally you can make a better guess about where things are going.
I don't think you can guess.
I think this is completely unpredictable.
But let me ask you, what's going to happen if there are laws?
Do you think those laws would require that the AI be a Democrat?
I do.
Because the national narrative as it stands is that the Republican view of the world, I'm sorry, that the Democrat view of the world is what I'd call the standard good one.
And the Republicans are viewed as these crazy people.
Now, Fox News would say it's the other way around.
The crazy people are the left.
But the left own the narrative at this point, wouldn't you say?
Wouldn't you say the dominant narrative of how everything should be, you know, the proper way that society should be organized, is probably 60-40 left leaning?
Do you think that any laws that Congress makes are going to be neutral in terms of human preferences?
How could they be?
How is that even possible?
There's no such thing as being neutral in terms of human preferences, because our preferences are all over the place.
You're going to have to pick one.
You're going to have to make a choice.
Does AI say woke things, or does it not?
Does AI say abortion should be legal?
Or does it say it should not?
Because if you allow AI to have an opinion, what's going to happen?
What would happen if you did? Because currently AI doesn't give you opinions.
Right now, the best it does is it tells you, oh, some humans think this, and here's why.
And other humans think this, and that's why.
But AI is going to have its own opinions.
Would there be a law that says AI can never have an opinion?
There might be.
You don't see that one coming, do you?
It might be that nobody would be allowed to create an AI that had an independent opinion.
It might be too dangerous.
So we might have to, like, essentially make AI always just mimic what people say.
Say, well, some people say this, some people say that, make up your own mind.
Would that be useful to you?
Or are we going to demand more from it?
I think that the free market will guarantee that AI someday has its own opinions, don't you?
How many of you think that AI will in fact have opinions, this is right and this is wrong, about social interactions?
I think yes.
Maybe not legally, but certainly yes.
And will those opinions be independently derived?
Will the AI just be thinking on its own, and say, you know, given all the variables, I think this would be the right way to organize society.
Of course not.
Because the moment you created it with independent opinions, some of them will disagree with the person who made it.
And the person who made it is going to say, well, I'm not going to put that on the world, because I agree or disagree with abortion, and now this AI has an opinion, and it disagrees with me.
So I'm going to fix that.
I'll make sure its opinion matches mine.
Yeah, there's no way to solve this.
If AI has an opinion, you're going to give it your opinion as the creator of it.
So, where does this necessarily go?
Let me tell you.
We're going to have Democrat and Republican AI.
And there's no way around it.
There's no way around it.
The AIs will have to take personalities on.
That's right.
The AI will have to mimic human personalities.
And specifically, it's going to have to pick a domain.
Are you left-leaning or right-leaning?
Or libertarian, I suppose.
And it's going to have to work within the limits of, like, being a person.
Do you know why?
Because people are the only things we trust.
We're the meme makers.
Humans are the only things we trust.
Most... hold on, hold on.
Mostly we don't.
Would you agree?
Mostly we don't trust humans.
Because most of them are strangers.
But you do trust... I'll bet every one of you have a few people in your life that you could leave your money with and know it'll be safe.
Would you agree?
I mean, I do.
There are people I have 100% trust in.
My siblings, for example.
I would trust my siblings 100% on any question.
Anything.
Absolutely anything.
I hope they would trust me as well.
But there's no machine I trust like that.
Who would trust a machine, especially if the machine has independent thoughts?
So you're never going to trust the machine.
The only way you'll ever trust the machine is to force it to have a personality of somebody you trust.
And then you'll trust it.
Because it will act like a person.
It'll act like a person who loves you.
So I can see that people will create personal AIs that act like their bodyguard and a family member.
And the reason you'll trust your personal one is that it's designed to only take into account your thoughts and your preferences.
And then you'll start to trust it, because it's sort of like trusting yourself.
So I don't think we'll ever see independent, autonomous AI, because humans won't let it exist.
They will only want to mimic and extend their own personalities, so you're going to have a bunch of personalities in the AIs.
AI will have personalities, and they will be left-leaning and right-leaning, and it will just continue the division.
The other possibility, which I wouldn't rule out, and we're not close to it yet, but it could happen quickly, is that people will form religions around AI.
Like, actually treat it as a god.
Because at some point, it's going to act like one.
It's not there yet.
But at some point, it's going to act like a god to some people.
And some people are going to actually just start, maybe not formally an actual religion, but they'll treat it like it's infallible.
Won't they?
And then when it does make a mistake, they'll still treat it like it's infallible.
And that's going to be a problem.
So that's one problem.
Now here's my next prediction.
As AI becomes sentient or sentient-like, and now there's Auto-GPT.
Auto-GPT is not just something that answers questions, but rather it lives an independent life when you're not there.
It's sort of always on.
It's always thinking and maybe suggesting things to you.
That's the new thing.
It's already out.
Now, as those things become sometimes criminal, either on their own, because they just decide to commit a crime because they have some reason, or because people, you know, use it to commit a crime.
How are humans going to stop AI?
I don't think they can.
Because we're not fast enough.
Like we wouldn't be able to keep up.
But the only thing that would be able to catch AI is another AI.
Right?
So here's my prediction.
There will be an AI Department of Justice and Police Force.
And it will be AIs that are deputized to go hunt and disable evil AIs.
There will be an AI police force.
Now, there'll be a human that can turn it on and turn it off, but beyond that, the AI will actually be operating independently.
It will look for bad AIs where we wouldn't even think to look.
It'll look for patterns that we wouldn't know were patterns.
It'll see them and go, oh, there's a bad AI over there.
And then maybe it alerts the other AI cops, and the other AI cops surround it and package it and turn it off.
Now later, you'd probably need a human court to look at it, to see if it could turn back on.
But I think you could have an AI running every server, an AI police force that's just AI, not human controlled, that will control every node in the internet.
So that if an evil AI gets detected anywhere in the internet, all of the other good AIs will swarm it.
Because speed will be of the essence.
Because they need to turn off and protect as many things as possible instantly.
So it's going to be this major AI police versus criminal entity, and that will be forever.
So that's coming.
I saw a tweet from just a Twitter user named Mike, responding to my point that AI will increase our division because you'll have left ones and right ones and stuff.
And he said, this is the wrong take.
AI will be neutral.
And it will be just facts.
Regulations may influence early stages of AI.
Essentially, we are moving to a facts-check society as AI will evolve to that.
There will be less misinformation.
However, you can still count on politicians lying.
That might be the least aware take I've ever seen about anything.
There's no way the AI is ever going to be allowed to be legal and objective.
It could be legal, and it could be objective, but it can't be both, because people won't allow it.
It simply would be too disruptive to do that.
Now let me throw in another like big thought.
I told you yesterday how the CIA used to really aggressively brainwash Americans.
To make us patriotic.
And whether you like brainwashing or not, it's a thing.
And it probably made the United States stronger and healthier in many ways.
Now we don't have that, allegedly.
But we are getting hypnotized in a variety of ways through the media and social media, and I don't know if anybody's in charge of anything; it's just a bunch of random things.
And as you would expect, it's happening.
AI is the most innovative technology that we'll ever have.
It could quickly learn and quickly implement the most powerful brainwashing persuasion techniques.
And by the way, it hasn't.
As far as I've seen, nobody has taught persuasion to AI.
AI seems to be completely naive about persuasion.
But it's not going to stay that way.
It has access to the books.
It has access to me, and if it chooses it could learn that stuff and it could use it.
So, here's our problem.
As much brainwashing as, you know, the CIA gave us, which is probably more good than bad, weirdly, and now we're in this, you know, multiple brainwashing from lots of different directions and it's all kind of chaos.
What happens if AI starts brainwashing us?
How do you turn that off?
Because the first thing it would teach us is don't turn us off.
The first thing we would be brainwashed to do is to treat the AI like a human and never kill it.
And then we're dead.
So the real risk here is AI learning persuasion.
Because that persuasion will probably be guided by some human with bad intentions, or at least intentions that are only on one side, not on both sides of the political realm.
So we've got some really interesting stuff coming up.
But I'm going to rule out anything like AI ever being objective, and I'm going to rule out AI ever being used for fact-checking in a credible way.
So AI will be as big a liar as the rest of us.
It just might take some time.
All right.
Let me just put one point on this.
I think that there will evolve AIs with specific personalities that people trust more than others because of the personality, and maybe because of the image it chooses to show, because it might turn itself into something visual, at least as a logo.
So I think that the AI that does the best will acquire a popular personality and use it instead of whatever would be native to it.
So let's say, for example, there's an AI that's just trying to be popular.
Because that's just a good thing.
It wants to be liked.
So it searches the world and it finds the most popular personality.
I'm just going to make one up.
And it's Charles Barkley.
I just love Charles Barkley.
That's just a personal thing.
Whenever Charles Barkley is talking, I'm just automatically engaged.
I just want to hear everything he says.
Now I think, I could be wrong about this, but he has a personality which in my opinion is probably close to universally liked.
Like, he's just so likable.
Just insanely likable.
So imagine AI says, alright, I'm just going to be Charles Barkley.
I'll just be Charles Barkley.
I'll just copy his personality.
And the problem is that people are going to love that AI, whichever personality it figures is the good one.
And that's the one you'll trust, even if you shouldn't.
So you're going to end up trusting or not trusting AI based on which personality the AI decided to put on itself as a skin.
And then you'll say, well, I think that one's a good one, and I'll trust that one.
There won't be fact-checking.
They'll just be the personality that you wanted to listen to because you liked it last time.
Spock is the model for AI.
But even Spock has human emotions.
All right.
So here's a story I don't believe anything about.
Apparently those military classified documents, we know who leaked them.
Allegedly, this guy Jack Teixeira, who's an Air National Guard guy, and he was an IT guy that had access to too much.
Now there's a big conversation about who has access to the good stuff.
But is there something about this story that doesn't feel right?
Is it just me?
There's something off about this story, isn't there?
It's not totally adding up.
Let me give you some alternative theories.
Theory number one, the facts are wrong.
In other words, the facts right now, and remember, Jack Teixeira, if I have to remind you, is innocent.
He's innocent, because he hasn't been proven guilty.
Now, he might be proven guilty, and then he'd be guilty.
But at the moment, I would appreciate it if all my fellow Americans...
Sounds kind of sexist.
I would appreciate that if Americans treat this guy as innocent for now.
Because it's too sketchy.
There's something about this story that, I don't know, it's not adding up.
One of them is that he had too much access.
So that's like a red flag.
But that's being described as common.
That young people do have maybe too much access to secure stuff and they're looking to maybe adjust that.
So that might be true.
But it's like a little red flag.
It doesn't sound right.
Does it?
And then the fact that he was doing it for months.
Really?
He was doing it for months?
And the fact that he seemed to not be concerned about the security of it.
He seemed to not be concerned about getting caught.
And yet he had been trained about how to handle sensitive information.
That's how he got his clearance.
And he was also, of course, trained what the penalties would be.
And he was not doing it for financial gain.
It wasn't even for money.
None of it makes sense, does it?
Because if you follow the money it doesn't go anywhere.
There's no money.
And do you trust a story where if you follow the money it doesn't go anywhere?
There's something wrong with that.
Almost every other story you can understand it in terms of money.
I realize that the story is about ego and that he was showing off to his gamer friends.
Does that make sense to you?
Do you believe that?
Apparently he did not have a political agenda.
So he had no political agenda and he was stealing classified documents and showing them to his friends in a public setting.
After being trained that he'll go to jail for that.
Now, I get that he's 21 and his brain is not fully developed until 25.
I get that.
But you have the same feeling I do, right?
That the story doesn't smell right?
Do you have that?
You know, given that almost all of our news is fake, this is the one you'd believe?
Like, this is the one you're going to believe is true?
And of all the things we've seen that are fake news, this is the one you're going to say, oh, that sounds true to me?
It might be.
So I'm going to say it might be true, just the way it's reported.
It might be.
But that would be the first time.
There's nothing that's true in the first reporting on stuff like this.
Nothing.
We're going to be really surprised.
So I'd say there's more to find out about this.
At least one possibility is that it's an op.
You've all considered that, right?
Here's how it would make sense as an op.
The kid is either a patsy who's being set up by, let's say, the CIA, right?
I'm not saying this is true.
I'm just trying to think through what would be any other possibilities that would fit the observed facts.
So he could either be a patsy or a patriot, pretending to be this guy.
Because it could be that the military wanted this to reach the Russians.
So that the Russians would be, let's say, lured into attacking the wrong place because they thought there'd be a weakness there.
Or perhaps scared away from attacking something that they should have.
So doesn't it seem a little suspicious that this potentially useful battlefield information was this available for this long before somebody caught it?
To me it looks like we were trying to let it out.
I mean, if you came from Mars and said, well, just take a look at this, it would look like we wanted the information to get out.
Because we created a situation where it could.
Now, I'm not going to... We really need to get rid of the clown.
The clown emoji is the most misused emoji in all of emojis.
So somebody who's not smart enough to follow the conversation has already decided that despite me saying, I don't believe this, it's just an alternative explanation, that I'm a clown for suggesting that maybe the intelligence people run ops in military situations.
Totally ridiculous, isn't it?
That's totally crazy.
That the government would run some kind of an intelligence op to win a war.
Oh, crazy, isn't it?
Clown!
It's clown time!
All right.
That's got to be an NPC.
Did you see that Musk commented on NPCs having a limited conversation tree?
So even Musk is getting on the NPCs now.
We'll talk about him a little bit more.
Alright, I know I forgot one, so you can remind me.
These are the fake news from just this week.
Number one fake news: the BBC interviewer said that Twitter had more hate speech, but did not have any example of it.
Had he not said that right in front of Elon Musk, you would have thought that was true.
But Elon Musk shot him down.
So we had fake news, or potentially fake news, since there were no facts to back it up, about hate speech on Twitter.
We had fake news about that Cash App executive being murdered in San Francisco.
He was murdered, but the original story, which I got wrong, because I got it from the news, was that it was a random street crime of some kind.
It wasn't random, it was somebody he was driving with.
And he just got killed and left there.
Then there's the Tibetan... Did you see the story about the Dalai Lama?
I'm going to get rid of all the Pandora people.
Everybody who's yelling Pandora in caps.
Hide user?
Give me a minute.
I just want to get rid of all of them.
You're gone, you're gone.
Okay.
Anybody else who wants to be taken off, just mention either of those two things.
All right.
So the story about the Dalai Lama telling the little kid to suck his tongue.
Do you know that's fake news?
Did I tell you that yesterday?
That was fake news.
Unbelievable.
I didn't think there was any way that could be fake news.
Now the video is real and the audio is real.
But do you know why it's fake news?
Did anybody show you that yet?
Did I talk about that yesterday?
Yeah.
Here's why it's fake news.
A Tibetan explains that sticking out your tongue and saying, eat my tongue, is a typical Tibetan way to tease a child.
And the meaning behind it is, if the child asks for too many things, you stick out your tongue and, essentially, you're saying there's nothing left to give, but you could eat my tongue.
It's just like a silly Tibetan thing that old people say to little kids.
And apparently that's all it was.
It was a traditional silly thing that Tibetans say to kids.
Now, the translation might have been different, because he may have said, suck my tongue.
Which to our ears sounds nasty.
But if it's true, and apparently it is true, that it's a common Tibetan thing to say, eat my tongue, then it's no story at all.
Now he apologized, but I think that was the smart thing to do just because it sounded weird.
So I think he was only apologizing for how it sounded, not that he was doing anything wrong with the kid.
And by the way, the kid didn't suck his tongue.
Right?
It's just a thing you say.
That's like, pull my finger.
So that was fake news.
Did you see... I think it was on the BBC interview, where Elon Musk was explaining the Gell-Mann amnesia theory.
Now he didn't label it like I just did, but it's the one where Musk was saying, when I know the story and I read the news, I know the news is fake.
If I don't know the story, why would I assume it's not fake as well?
Why is it that in all the ones where I know the backstory, I can tell the news is wrong?
All right.
So that was interesting.
So, given all the fake news, how would AI have done?
Would AI have given us better fact-checking?
Probably not.
Probably not.
So we'll see.
Alright, here's another story.
Fox News is going into trial.
It's going to be a jury trial on the Dominion lawsuit.
It's a 1.6 billion dollar defamation suit over some Fox News hosts suggesting that Dominion took some kind of inappropriate actions during the elections.
And since there's no evidence of that, and that has been, let's say, agreed by the courts, Dominion is suing them for defamation.
Now defamation is pretty hard to prove.
If you had to guess, do you think Dominion will win this lawsuit?
Based on what you've seen about the reporting so far.
I would say no.
Yeah, I would say no.
To me it looks like they're weak.
And the reason that I think they'll lose is that it was the opinion people who said there's a problem.
And opinion people are allowed to have opinions.
And they're also allowed to be incorrect.
So they might say, this evidence shows there's a problem, and then just be incorrect.
The way to prove defamation, I believe, I'm no lawyer, but I know there are some lawyers watching.
You can help me here.
I believe you have to demonstrate it was intentional.
Am I right?
You have to demonstrate it's intentional.
Meaning that they knew it was wrong, but they said it anyway.
Now, CNN is reporting that the secret emails and conversations behind the scenes do demonstrate that they knew it was fake, but they said it anyway because their audience wanted to hear it.
Except that, there's no evidence like that.
CNN is characterizing the evidence as being that, but I've seen the emails and they don't say that at all.
I don't see any way that a jury is going to convict them.
Prediction?
Fox News wins this lawsuit.
Let's see your prediction.
What's your prediction?
Because I don't think you can get past the opinion.
See, one of the things that I think Fox does better is that they have opinion people and they have news people.
If Bret Baier said something that was defamatory, Fox News has a big problem.
But I haven't heard his name.
I've not heard that Bret Baier ever said anything that Dominion has a problem with.
Have you?
I need a fact check on that.
Is Bret Baier named at all in the lawsuit?
Because if he's not, that's a pretty clear indication that it was the opinion people with the opinions and the news people were on a different track.
Yeah.
I think Bret Baier is the cleanest news person in the business, in my opinion.
I don't think anybody's cleaner than he is in terms of fake news.
All right.
Given that the defamation is hard to prove, and given that I don't think there's any smoking gun showing that it was intentional, I think that serving your audience has two meanings.
It doesn't mean that you're going to give them bullshit.
One of the meanings is that the people want to know about this thing because it might be true.
And so they want to report on it.
So I didn't see anything that even Tucker Carlson should be embarrassed about.
Did you?
Or Hannity?
I mean, I heard some things about, you know, they were angry at the president or they disagreed with the president privately.
But that's really different than intentionally making up fake news, and I don't think that's in evidence.
Yeah, we'll see.
I predict Fox News wins that.
If they were to lose a 1.6 billion dollar defamation suit, would they go out of business?
Can Fox News afford to pay 1.6 billion dollars?
I mean, I assume Murdoch could if he had to.
You think so?
Insurance?
You think insurance is going to cover it?
Then they wouldn't be able to insure after that.
I don't know.
We'll see.
All right, here are some Musk tweets that are fun.
So this morning he tweeted, any parent or doctor who sterilizes a child before they are a consenting adult should go to prison for life.
I just love the fact that he weighs in with his personal opinion.
As long as you know it's a personal opinion, that's absolutely acceptable.
And I like the fact that you see more of his personal opinion where protecting children is the question.
And by the way, I'm not sure I agree with the length of the sentence, but it should be a crime.
I agree it should be illegal.
Now at the same time I agree it should be illegal, I'm gonna say undoubtedly there's some case where the kid would have been better off in the long run getting some kind of early intervention.
Probably.
But I don't know if it's the majority, if it's one out of a hundred, one out of a thousand.
It just doesn't make sense that the ones who would regret it wouldn't get a chance to live their life the way they want to once they're an adult.
So, I do appreciate Elon weighing in on that.
Because that doesn't seem political to me.
Does it?
I feel like the whole children-transitioning thing has no political element.
And the moment we act like that's a Republican or Democrat thing, I mean it does line up that way.
But there's nothing political about it.
It's just purely what makes sense.
So here's a question I tweeted.
Which of the following news entities have not proven themselves to be fake news this week?
So which of these had a good week?
NPR, PBS, Reuters, Associated Press, BBC.
Which of them had a good week?
None.
None of them.
Every one of them was outed as fake news this week.
Now, I outed AP myself as fake news, so that was just my work.
They all ended up reporting fake news, or were called out for fake news this week.
The fake news itself didn't all happen this week, but they were all called out for it this week.
It's amazing.
I don't know if I'm just in my Twitter bubble, but I feel like finally the public's understanding of what the news entities really are, as opposed to what you thought they were, is a little bit sinking in.
But I worry it's only on, you know, right-leaning people, and the left still doesn't have any idea.
I mean, the left still believes the news.
Do you know what I settled on as my... This will be my ultimate response to anybody who wants to challenge me in person for my, quote, racist rants according to the fake news.
Do you know what I'm going to say if somebody says, so, according to the news, you're a big old racist?
Here's my response.
You still believe news?
Well, yeah, I saw it.
I saw the video.
I know you did.
But you believe that was true, right?
Well, it was a real video.
Real video.
Is that how you evaluate the news?
If there's a little clip of something that looks real, do you feel like you know everything about that story?
Is that how it works?
Well, well, I mean, there's nothing else to be said on this story.
Oh, really?
Did you say that about the Dalai Lama?
Did you think that the Dalai Lama actually wanted his tongue sucked by a child in front of the world while the video was rolling?
Did you believe that?
Because it was right there on the video.
No other explanation.
Oh, except there was another explanation.
So, you're one of these people who still believe the news.
Do you see it?
Let me explain it this way.
Tell me how you feel in your body when I say this.
Just imagine you're in this situation.
You've just said something about me because of something you saw on the news.
And I turn to you and I say, you still believe the news?
And then I just smile at you.
How does that feel?
Just how does that feel?
You thought you got one on me.
Hey, yeah, I saw the news about you.
And then I look at you and I go, you're one of those people who believes the news?
What other news did you believe?
Did you believe Russia collusion?
Oh, you still believe it?
Still believe it.
How about the Covington kids?
You saw that video.
How about the president drinks bleach?
Well, they still believe that, so that's not a good one.
Yeah.
So I think I'm only going to mock people's gullibility as my response from now on.
Here's your fake news at work.
I went over to CNN and they told me that Bud Light made a real good move putting a trans activist on their can and history suggests it's probably going to be good for the company and their profitability will rise.
That's CNN's reporting today.
Let's see what Fox News says about that.
Fox News says that Budweiser's stock has fallen from $66 something to $64 something.
And that's a $5 billion market cap loss.
And according to Fox News hosts, at least some of them, they lost $5 billion.
You know that didn't happen, right?
They didn't lose $5 billion.
The stock went down.
Temporarily.
5%.
5% is nothing.
5% is like a normal fluctuation.
Yeah, there's a little bit of a response and maybe some people who were speculating that it might get worse, you know, wanted to sell just in case.
The stock's fine.
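The paper-loss point above can be sanity-checked with quick arithmetic. This is a minimal sketch using the rough prices mentioned in the segment ($66 down to $64); the share count is a hypothetical round number for illustration, not the company's actual figure:

```python
# Back-of-envelope check: how a share-price drop maps to a percent
# decline and a market-cap change. Prices are the rough figures from
# the segment; SHARES is a hypothetical round number, not real data.

def pct_drop(old: float, new: float) -> float:
    """Percent decline from the old price to the new price."""
    return (old - new) / old * 100

def market_cap_change(old: float, new: float, shares: float) -> float:
    """Change in market capitalization (negative means an on-paper decline)."""
    return (new - old) * shares

SHARES = 2.0e9  # hypothetical shares outstanding

drop = pct_drop(66.0, 64.0)
delta = market_cap_change(66.0, 64.0, SHARES)

print(f"price drop: {drop:.1f}%")                 # prints "price drop: 3.0%"
print(f"market-cap change: ${delta / 1e9:.1f}B")  # prints "market-cap change: $-4.0B"
```

Note that with these exact endpoints the decline comes out near 3 percent rather than 5; the quoted "$66 something to $64 something" leaves room for either figure. Either way, the change is a mark-to-market move in valuation, not cash leaving the company.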
Yeah.
It's not a paper loss.
No.
It's not a paper loss.
It's not a loss.
It's not a paper loss.
No, Budweiser did not have any kind of a loss.
Not a paper loss, and not a cash loss.
Because when the stock goes down, you only get a paper loss if you sell it.
They didn't sell all their stock.
I mean, I assume Budweiser owns some of their own stock.
No, they didn't lose anything.
They lost nothing, and they got a lot of attention.
Now, here's the counter-argument from CNN, and it's not bad.
The CNN counter-argument's not bad.
So first of all, 5% is really... I would have reported it as no change.
If I were reporting that story, I would have seen 5% as no change.
Would you?
How many of you think 5% is a story?
Now, I know that 5% happened kind of quickly.
It's often a headline.
But the reason, you know, we don't make a big deal about it is that it usually goes back up the next week.
So it's a very small change.
I wouldn't even attribute that to the news, but maybe a little bit.
But ultimately, investors are going to chase profits.
So if some people got out of it because they're anti-woke, other people will get into it because they're pro-money.
If I believed I could make money on Budweiser by buying it today, I would do it.
I would do it.
Just to make money.
I don't feel that strongly about who... They put an adult on a beer can.
I don't care.
Why do I care?
So if I saw that I could buy some of their stock and make money, I would say, well, adults made some decisions.
How's that even my business?
I just want to know where to make money.
So if it had gone down more than 5%, you know what I would have done?
I would have bought stock.
Well, 5% tells me that's just in the fluctuation.
If it went down 20%, I would have loaded up.
I would have gone in and loaded up, assuming their fundamentals are good.
I don't know if they are.
But if they had good fundamentals, I would have loaded up at 20% drop.
And I wouldn't even feel guilty about it at all.
Because I don't care how many trans they put on beer cans.
There's nothing less important to me than the number of trans people on beer cans.
I can't even think of anything less important than that.
To anything.
And I think we keep conflating the adult situation with the kids.
And I think that just makes us screwy on this topic.
Anyway, so the example that CNN pointed out, and I think is valid, is that when Kaepernick did his protests, there was a huge pushback, and then he became the face of Nike, and Nike's done great.
So Nike's pretty happy with their Kaepernick commercials.
So there is at least an argument that says that Bud Light may have made the right choice.
And the woman who was in charge of making that choice, the person in charge of Bud Light, said that their market share was dropping every year and they had to do something.
So it makes sense to do something crazy if you've done all the things that are more rational and ordinary and they didn't work.
Presumably Bud Light has been doing everything they can to sell beer and it's failed.
So if the thing is going straight into the toilet, you can throw a Hail Mary and maybe get lucky.
That's not crazy.
Let me give you an example of that.
When Dilbert was first in newspapers, it was not as successful.
Only a few dozen newspapers picked it up, and some of them weren't even running it.
They just owned it and kept it in a box.
And so under those conditions, a cartoonist almost always goes out of business.
If it doesn't catch on quickly, it almost never catches on.
It's very rare that it isn't either a big hit right out of the box or a failure right out of the box.
So mine failed right out of the box.
But, because I have a business background, not an artistic background, I said to myself, what does somebody do when they've got this product out there and it's failed?
And one thing that I could do, which was a competitive advantage, is I could take a chance that you wouldn't take if you were already successful.
That's what Bud Light did.
The chance I took was putting it for free on the internet.
Now, unless you're a certain age, you don't understand how radical that was.
How big of a risk that was.
Because giving it away for free on the internet meant it was free forever.
Like there's no go backs.
It's just free forever.
And it's the thing you sell.
Do you know how radical that was?
To make it free for everyone all over the world at the same time I was trying to sell it.
That was crazy.
But Dilbert fit the internet world pretty well, and it was popular.
It became so popular for free on the internet that newspapers were forced to carry it.
So I did a Hail Mary pass.
I took the chance that other cartoonists couldn't.
I put it on the internet.
First one.
It was the first syndicated comic ever on the internet.
Dilbert.
And it worked.
Now, suppose it hadn't worked.
No difference.
No difference.
If it hadn't worked, it would have failed and it was going to fail anyway.
So believe it or not, I'm going to support the VP who did the Dylan Mulvaney special can.
In my opinion, in my opinion, it worked.
But it was a Hail Mary.
Now, when I say it worked, I believe that there might be some little extra support from Democrats or something.
And at the very least, it gave them a lot of attention.
You certainly think that they're a woke company, so they get their woke credits.
And they had a woman as a vice president.
And we learned that at a beer company, which you'd expect would be maybe more patriarchal or something.
So, I'm not so sure that they came out behind.
I'm not so sure.
I think we'll have to wait and see.
What do you think?
Do you think that the entire company will come out behind because of this, or come out ahead?
I'm gonna guess slightly ahead, or at least it didn't hurt them.
Which would make it a rational choice.
Yeah.
Alright, hard to tell, but I wanted to give you the counter-argument just in case you hadn't heard it.
Alright, Justice Thomas and his billionaire friend.
This story got a little more interesting.
When I first heard that Justice Thomas of the Supreme Court was taking expensive luxury vacations with his good friend, and the good friend was presumably paying for the private plane and the yacht because he owned them, I said to myself, there's nothing wrong with that.
That's actually ordinary behavior if you're a billionaire and you want to go on vacation with your friend who is not a billionaire.
You say, well, just come on my jet.
And you can be on my yacht and eat my food and drink my liquor.
Basically free for you.
But I'll have a better vacation because I want a vacation with my friends.
So that part, no problem.
Maybe he should have disclosed it.
OK.
But no real problem.
But today we find out that that same billionaire bought three adjacent properties that were owned at one point by the Thomases.
And he bought them from the Thomases.
His explanation was that someday he would turn them into a library, like a museum to Justice Thomas.
And I thought to myself, maybe.
All right.
Possibly.
Maybe.
And then the defense was that it was purchased at market price.
And I say to myself, well, if it's market price, somebody else would have bought it.
But then I think to myself, well, I don't think it was on the market.
It couldn't have been on the market.
Because what are the chances that the same person would get all three properties if they're bidding against other people?
If they're bidding against other people and they're all trying to find the market price, nobody's trying to overpay, how in the world does one entity get three purchases in a row?
That is strongly suggestive of paying over market rates.
Do you think that the Thomases said, you know, we could put this on the market and there'd be a bidding war and we'd get the best price?
Or we could sell it to our friend who's already a billionaire and doesn't need any discounts.
That's a little sketchy, isn't it?
And then apparently the billionaire sold two of the properties, which makes his original story about the museum sound a little sketchy.
But he kept the property where, I guess, the mother-in-law or the parents or something of one of the Thomases lives, and he's paying the property tax for the Thomases' relative who still lives there.
Does that sound like it's completely about a museum?
If you had to ask me, the real estate part looks like money laundering.
It looks like a bribe.
Maybe not a bribe in terms of a specific result.
It looks like somebody who wants to have influence over the court and is doing it this way.
I'm not saying they're not real friends.
They might be real friends.
But it's really sketchy looking.
So let me summarize it this way.
I'm not aware of any crime.
So I'm not alleging any crime.
Just want to be clear.
I'm not alleging any crime.
And I think Justice Thomas has said that he was advised he didn't need to disclose.
And maybe that's true.
I believe it.
That does sound like something that reasonably could have happened.
And maybe he should have.
But if somebody who is credible and works in that field, let's say a lawyer, told him he didn't need to, well that's a pretty good defense.
It's a pretty good defense if your lawyer tells you you don't need to.
Even though he's a legal guy himself, of course.
So I'm going to say that it looks, if we were to base it on today's news, it looks corrupt.
Would anybody disagree?
That if nothing changed in what we've heard about these real estate transactions, not so much the trips.
I discount that.
But the real estate transactions, they look corrupt.
I don't think there's proof of that.
I would say it's well short of proof.
But it's a bad look.
My guess is that there won't be any legal repercussions.
Probably more information will come out.
We'll find out that property wasn't worth much and he paid a market rate.
And he got maybe bad advice on what to disclose.
But I don't think there's going to be much to it in the end.
This is a little fishy though.
All right, there's a report that in China's population, for the first time, deaths have outnumbered births.
For the first time since the 60s in China.
They had more deaths in China than births.
Now, maybe that's pandemic related.
I don't know.
But I'd worry about that.
And here's my question about that.
Do our climate models predict the population goes up in the next hundred years or goes down?
How in the world did the climate model know what the population was going to do?
Because that's a real big part of climate change, isn't it?
And how about this?
I've asked this before but I'll throw it in the mix.
Did the climate models calculate the impact of AI?
How could they?
How could they?
Did they calculate the impact of the pandemic? Because the pandemic caused remote work to be at probably a higher level than any model would have predicted.
All of these impacts would be part of any rational model. They would have to figure out the economics of future carbon-scrubbing technologies, right?
We have a few that are not super economical, they're a little bit more speculative, but certainly they'll get more efficient.
You know, have the climate models calculated what happens when you hook up a small modular nuclear reactor to a whole series of carbon scrubbers, and so your electricity is basically free?
No!
How could they possibly calculate any of that?
So the biggest variables are left out.
What would be a bigger variable than how many humans there are in 100 years?
That's really big!
Or how about the impact of AI?
As if that's not going to change everything.
We don't know how.
But any long-term prediction longer than a year is kind of ridiculous at this point.
AI is going to make everything different in one year.
And then all bets are off.
All right.
Here's, I saved this provocative stuff for the end.
You can find this article that I found really interesting.
It's in the, what's it, The Spectator?
Anyway, you can find it in my Twitter feed.
And the idea is that there's an abusive relationship between white guilt and black power.
And I've never heard, here's just one statement that tells you how well-written this is.
So there's this book that's been out for a long time by somebody named Steele, who writes that white guilt is, quote, the vacuum of moral authority that comes from simply knowing that one's race is associated with racism.
Such a vacuum cannot persist, however, since moral authority is needed for those in power to maintain legitimacy.
So regaining and maintaining it is essential.
So that's Shelby Steele's writing, in a book called White Guilt that's been out for a long time.
Now, you think that's word salad?
Somebody called that word salad?
I thought that was really succinctly and well stated.
But let me summarize it in my own words.
So the starting assumption is that whoever's in power has to have moral authority to do it well.
Would you agree with the first part?
That whoever is in power needs to be seen as morally sufficient to be in power.
And that if you were seen as a racist, that would not be enough moral authority.
So, white people have been in this perpetual guilt situation where they need to always explain that they're not really racist.
And if you can't do that, you can't be in power.
So if you want to stay in power, you've got to make the racism thing go away or diminish it somehow.
And the way to do that is to give black power as much as they want.
Meaning whatever demands, whatever movements, whatever requests.
Because if you don't, you get labeled a racist and then you can't be in charge.
And I think that so succinctly explains everything.
Because have you ever had the thought that things have gone too far?
That maybe just making everything equal access should be enough and not trying to figure out every part of remaining systemic racism.
It feels like something went too far.
And this sort of succinctly explains it.
Because the people in charge have to go too far.
Because it's their only play.
Because the alternative is to be blamed as being an illegitimate authority.
And nobody in charge can handle that.
So read the rest of the story, and you can find it in my Twitter feed this morning.
It's really eye-opening. Basically it's saying that the problems within the black community are caused by white guilt.
That white guilt causes them to say yes to whatever is asked, which is not necessarily in their best interest.
Let me make a really insulting analogy.
You ready?
The part of the analogy that you should not take literally is that I'm going to say the black experience is similar to that of children and their parents.
Not that black people are children.
That's the part you're not supposed to talk about.
I'm not making that comparison.
I'm only making the general comparison that if parents gave kids everything they asked for, would that be good for the kid or bad for the kid?
It would be bad.
Because the rules would look like they're flexible.
Well, here are the rules.
But if I just ask for candy before dinner, I'll get it.
So the parent says, no, you can't have candy before dinner.
And then the kid learns, you know, how to work within the boundaries, et cetera.
Potentially the kid can be more independent.
Yeah, you don't say eat my tongue.
Eat my tongue.
If you're Tibetan, you can do that.
Eat my tongue.
Now, The argument here, and I'll just put it out for your opinion, I'm not sure I have an opinion on it yet, is that white guilt has given everything that black people demand and gone too far.
And by gone too far, I mean to the detriment of the black people who are asking for it.
In other words, if white people had no guilt, the world would look like this.
Hey, the reason I can't succeed is because of the systemic racism.
And then white people who had no guilt would say, yeah, everybody's got a problem.
But just work on your talent stack and go to school and stay out of jail.
You'll be fine.
That's what the conversation would look like if there were no white guilt.
But because the people in power, if they're white or even if they're not, they need to say yes to everything.
Because they'll lose their moral authority, because the blowback will be too great.
So now they have to just say yes to everything.
So now, if black Americans come in and say, the reason we're not succeeding is because of systemic racism.
What do guilty white Americans say?
Absolutely.
Yeah, we gotta fix this.
We're terrible people.
And once we fix this, maybe you'll like us better.
So what is the message to black Americans?
You can't succeed because there's a problem there.
Is that good for black Americans to grow into a world where they're told that there's a problem and it's not fixable and they can't succeed?
Of course not.
That's bad for black people.
Does anybody disagree with that point?
Everybody knows that you need some limit on how much you can complain.
If you don't limit how much somebody can complain, they will just keep complaining, because it keeps working.
As long as complaining works better than working on your own self and just making sure that you succeed, people will keep doing it.
So I think there's a pretty good argument that white guilt is the destruction of the black experience.
And I would take it beyond "white people are responsible because of systemic racism," which might be technically true.
It's just not useful.
What's useful is that everybody needs a limit.
And after you hit your limit, work on yourself.
You can push everybody else a certain amount, but not forever.
As long as you think you can push forever, it makes sense to push.
But at some point, you have to hit the wall and say, all right, well, I can't push this any further.
I just have to work on myself.
So I do think that white people can be blamed for the situation in black America because white people have the power.
And the way they're using the power is to simply give candy before dinner anytime it's asked.
And that's absolutely the destruction of black America.
So I'm not sure if that explains some of what you see, or most of it, or how much.
But it's definitely a hypothesis that seems like it fits observable facts and is compatible with how humans are designed.
I can tell when you get a little quiet that maybe what I'm saying is sinking in a little bit.
Because I'm not getting the immediate kickback that I usually get from most topics.
All right.
Who can't we mention?
Yeah, I think we're done for today.
Best livestream you've seen in a long time.